Data Science & Big Data Analytics




About the Book

Data Science & Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data
EMC Education Services
Wiley


Data Science & Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data

Published by
John Wiley & Sons, Inc.
10475 Crosspoint Boulevard
Indianapolis, IN 46256
www.wiley.com

Copyright © 2015 by John Wiley & Sons, Inc., Indianapolis, Indiana
Published simultaneously in Canada

ISBN: 978-1-118-87613-8
ISBN: 978-1-118-87622-0 (ebk)
ISBN: 978-1-118-87605-3 (ebk)

Manufactured in the United States of America

10 9 8 7 6 5 4 3 2 1

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Web site is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that Internet websites listed in this work may have changed or disappeared between when this work was written and when it is read.

For general information on our other products and services please contact our Customer Care Department within the United States at (877) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Control Number: 2014946681

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.


 

Credits

Executive Editor: Carol Long
Project Editor: Kelly Talbot
Production Manager: Kathleen Wisor
Copy Editor: Karen Gill
Manager of Content Development and Assembly: Mary Beth Wakefield
Marketing Director: David Mayhew
Marketing Manager: Carrie Sherrill
Professional Technology and Strategy Director: Barry Pruett
Business Manager: Amy Knies
Associate Publisher: Jim Minatel
Project Coordinator, Cover: Patrick Redmond
Proofreader: Nancy Carrasco
Indexer: Johnna Van Hoose Dinse
Cover Designer: Mallesh Gurram


About the Key Contributors

David Dietrich heads the data science education team within EMC Education Services, where he leads the curriculum, strategy and course development related to Big Data Analytics and Data Science. He co-authored the first course in EMC's Data Science curriculum, two additional EMC courses focused on teaching leaders and executives about Big Data and data science, and is a contributing author and editor of this book. He has filed 14 patents in the areas of data science, data privacy, and cloud computing. David has been an advisor to several universities looking to develop academic programs related to data analytics, and has been a frequent speaker at conferences and industry events. He also has been a guest lecturer at universities in the Boston area. His work has been featured in major publications including Forbes, Harvard Business Review, and the 2014 Massachusetts Big Data Report, commissioned by Governor Deval Patrick. Involved with analytics and technology for nearly 20 years, David has worked with many Fortune 500 companies over his career, holding multiple roles involving analytics, including managing analytics and operations teams, delivering analytic consulting engagements, managing a line of analytical software products for regulating the US banking industry, and developing Software-as-a-Service and BI-as-a-Service offerings. Additionally, David collaborated with the U.S. Federal Reserve in developing predictive models for monitoring mortgage portfolios.

Barry Heller is an advisory technical education consultant at EMC Education Services. Barry is a course developer and curriculum advisor in the emerging technology areas of Big Data and data science. Prior to his current role, Barry was a consultant research scientist leading numerous analytical initiatives within EMC's Total Customer Experience organization. Early in his EMC career, he managed the statistical engineering group as well as led the data warehousing efforts in an Enterprise Resource Planning (ERP) implementation. Prior to joining EMC, Barry held managerial and analytical roles in reliability engineering functions at medical diagnostic and technology companies. During his career, he has applied his quantitative skill set to a myriad of business applications in the Customer Service, Engineering, Manufacturing, Sales/Marketing, Finance, and Legal arenas. Underscoring the importance of strong executive stakeholder engagement, many of his successes have resulted from focusing not only on the technical details of an analysis, but also on the decisions that result from the analysis. Barry earned a B.S. in Computational Mathematics from the Rochester Institute of Technology and an M.A. in Mathematics from the State University of New York (SUNY) New Paltz.

Beibei Yang is a Technical Education Consultant of EMC Education Services, responsible for developing several open courses at EMC related to Data Science and Big Data Analytics. Beibei has seven years of experience in the IT industry. Prior to EMC she worked as a software engineer, systems manager, and network manager for a Fortune 500 company, where she introduced new technologies to improve efficiency and encourage collaboration. Beibei has published papers at prestigious conferences and has filed multiple patents. She received her Ph.D. in computer science from the University of Massachusetts Lowell. She has a passion for natural language processing and data mining, especially using various tools and techniques to find hidden patterns and tell stories with data.

Data Science and Big Data Analytics is an exciting domain where the potential of digital information is maximized for making intelligent business decisions. We believe that this is an area that will attract a lot of talented students and professionals in the short, mid, and long term.


Acknowledgments

EMC Education Services embarked on learning this subject with the intent to develop an "open" curriculum and certification. It was a challenging journey at the time, as not many understood what it would take to be a true data scientist. After initial research (and struggle), we were able to define what was needed and attract very talented professionals to work on the project. The course, "Data Science and Big Data Analytics," has become well accepted across academia and the industry.

Led by EMC Education Services, this book is the result of efforts and contributions from a number of key EMC organizations and supported by the office of the CTO, IT, Global Services, and Engineering. Many sincere thanks to key contributors and subject matter experts David Dietrich, Barry Heller, and Beibei Yang for their work developing content and graphics for the chapters. A special thanks to subject matter experts John Cardente and Ganesh Rajaratnam for their active involvement reviewing multiple book chapters and providing valuable feedback throughout the project.

We are also grateful to the following experts from EMC and Pivotal for their support in reviewing and improving the content in this book:

Aidan O'Brien, Alexander Nunes, Bryan Miletich, Dan Baskette, Daniel Mepham, Dave Reiner, Deborah Stokes, Ellis Kriesberg, Frank Coleman, Hisham Arafat, Ira Schild, Jack Harwood, Jim McGroddy, Jody Goncalves, Joe Dery, Joe Kambourakis, Joe Milardo, John Sopka, Kathryn Stiles, Ken Taylor, Lanette Wells, Michael Hancock, Michael Vander Donk, Narayanan Krishnakumar, Richard Moore, Ron Glick, Stephen Maloney, Steve Todd, Suresh Thankappan, Tom McGowan

We also thank Ira Schild and Shane Goodrich for coordinating this project, Mallesh Gurram for the cover design, Chris Conroy and Rob Bradley for graphics, and the publisher, John Wiley and Sons, for timely support in bringing this book to the industry.

Nancy Gessler
Director, Education Services, EMC Corporation

Alok Shrivastava
Sr. Director, Education Services, EMC Corporation


Contents

Introduction

Chapter 1 • Introduction to Big Data Analytics
  1.1 Big Data Overview
    1.1.1 Data Structures
    1.1.2 Analyst Perspective on Data Repositories
  1.2 State of the Practice in Analytics
    1.2.1 BI Versus Data Science
    1.2.2 Current Analytical Architecture
    1.2.3 Drivers of Big Data
    1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics
  1.3 Key Roles for the New Big Data Ecosystem
  1.4 Examples of Big Data Analytics
  Summary
  Exercises
  Bibliography

Chapter 2 • Data Analytics Lifecycle
  2.1 Data Analytics Lifecycle Overview
    2.1.1 Key Roles for a Successful Analytics Project
    2.1.2 Background and Overview of Data Analytics Lifecycle
  2.2 Phase 1: Discovery
    2.2.1 Learning the Business Domain
    2.2.2 Resources
    2.2.3 Framing the Problem
    2.2.4 Identifying Key Stakeholders
    2.2.5 Interviewing the Analytics Sponsor
    2.2.6 Developing Initial Hypotheses
    2.2.7 Identifying Potential Data Sources
  2.3 Phase 2: Data Preparation
    2.3.1 Preparing the Analytic Sandbox
    2.3.2 Performing ETLT
    2.3.3 Learning About the Data
    2.3.4 Data Conditioning
    2.3.5 Survey and Visualize
    2.3.6 Common Tools for the Data Preparation Phase
  2.4 Phase 3: Model Planning
    2.4.1 Data Exploration and Variable Selection
    2.4.2 Model Selection
    2.4.3 Common Tools for the Model Planning Phase
  2.5 Phase 4: Model Building
    2.5.1 Common Tools for the Model Building Phase
  2.6 Phase 5: Communicate Results
  2.7 Phase 6: Operationalize
  2.8 Case Study: Global Innovation Network and Analysis (GINA)
    2.8.1 Phase 1: Discovery
    2.8.2 Phase 2: Data Preparation
    2.8.3 Phase 3: Model Planning
    2.8.4 Phase 4: Model Building
    2.8.5 Phase 5: Communicate Results
    2.8.6 Phase 6: Operationalize
  Summary
  Exercises
  Bibliography

Chapter 3 • Review of Basic Data Analytic Methods Using R
  3.1 Introduction to R
    3.1.1 R Graphical User Interfaces
    3.1.2 Data Import and Export
    3.1.3 Attribute and Data Types
    3.1.4 Descriptive Statistics
  3.2 Exploratory Data Analysis
    3.2.1 Visualization Before Analysis
    3.2.2 Dirty Data
    3.2.3 Visualizing a Single Variable
    3.2.4 Examining Multiple Variables
    3.2.5 Data Exploration Versus Presentation
  3.3 Statistical Methods for Evaluation
    3.3.1 Hypothesis Testing
    3.3.2 Difference of Means
    3.3.3 Wilcoxon Rank-Sum Test
    3.3.4 Type I and Type II Errors
    3.3.5 Power and Sample Size
    3.3.6 ANOVA
  Summary
  Exercises
  Bibliography

Chapter 4 • Advanced Analytical Theory and Methods: Clustering
  4.1 Overview of Clustering
  4.2 K-means
    4.2.1 Use Cases
    4.2.2 Overview of the Method
    4.2.3 Determining the Number of Clusters
    4.2.4 Diagnostics
    4.2.5 Reasons to Choose and Cautions
  4.3 Additional Algorithms
  Summary
  Exercises
  Bibliography

Chapter 5 • Advanced Analytical Theory and Methods: Association Rules
  5.1 Overview
  5.2 Apriori Algorithm
  5.3 Evaluation of Candidate Rules
  5.4 Applications of Association Rules
  5.5 An Example: Transactions in a Grocery Store
    5.5.1 The Groceries Dataset
    5.5.2 Frequent Itemset Generation
    5.5.3 Rule Generation and Visualization
  5.6 Validation and Testing
  5.7 Diagnostics
  Summary
  Exercises
  Bibliography

Chapter 6 • Advanced Analytical Theory and Methods: Regression
  6.1 Linear Regression
    6.1.1 Use Cases
    6.1.2 Model Description
    6.1.3 Diagnostics
  6.2 Logistic Regression
    6.2.1 Use Cases
    6.2.2 Model Description
    6.2.3 Diagnostics
  6.3 Reasons to Choose and Cautions
  6.4 Additional Regression Models
  Summary
  Exercises

Chapter 7 • Advanced Analytical Theory and Methods: Classification
  7.1 Decision Trees
    7.1.1 Overview of a Decision Tree
    7.1.2 The General Algorithm
    7.1.3 Decision Tree Algorithms
    7.1.4 Evaluating a Decision Tree
    7.1.5 Decision Trees in R
  7.2 Naïve Bayes
    7.2.1 Bayes' Theorem
    7.2.2 Naïve Bayes Classifier
    7.2.3 Smoothing
    7.2.4 Diagnostics
    7.2.5 Naïve Bayes in R
  7.3 Diagnostics of Classifiers
  7.4 Additional Classification Methods
  Summary
  Exercises
  Bibliography

Chapter 8 • Advanced Analytical Theory and Methods: Time Series Analysis
  8.1 Overview of Time Series Analysis
    8.1.1 Box-Jenkins Methodology
  8.2 ARIMA Model
    8.2.1 Autocorrelation Function (ACF)
    8.2.2 Autoregressive Models
    8.2.3 Moving Average Models
    8.2.4 ARMA and ARIMA Models
    8.2.5 Building and Evaluating an ARIMA Model
    8.2.6 Reasons to Choose and Cautions
  8.3 Additional Methods
  Summary
  Exercises

Chapter 9 • Advanced Analytical Theory and Methods: Text Analysis
  9.1 Text Analysis Steps
  9.2 A Text Analysis Example
  9.3 Collecting Raw Text
  9.4 Representing Text
  9.5 Term Frequency-Inverse Document Frequency (TFIDF)
  9.6 Categorizing Documents by Topics
  9.7 Determining Sentiments
  9.8 Gaining Insights
  Summary
  Exercises
  Bibliography

Chapter 10 • Advanced Analytics-Technology and Tools: MapReduce and Hadoop
  10.1 Analytics for Unstructured Data
    10.1.1 Use Cases
    10.1.2 MapReduce
    10.1.3 Apache Hadoop
  10.2 The Hadoop Ecosystem
    10.2.1 Pig
    10.2.2 Hive
    10.2.3 HBase
    10.2.4 Mahout
  10.3 NoSQL
  Summary
  Exercises
  Bibliography

Chapter 11 • Advanced Analytics-Technology and Tools: In-Database Analytics
  11.1 SQL Essentials
    11.1.1 Joins
    11.1.2 Set Operations
    11.1.3 Grouping Extensions
  11.2 In-Database Text Analysis
  11.3 Advanced SQL
    11.3.1 Window Functions
    11.3.2 User-Defined Functions and Aggregates
    11.3.3 Ordered Aggregates
    11.3.4 MADlib
  Summary
  Exercises
  Bibliography

Chapter 12 • The Endgame, or Putting It All Together
  12.1 Communicating and Operationalizing an Analytics Project
  12.2 Creating the Final Deliverables
    12.2.1 Developing Core Material for Multiple Audiences
    12.2.2 Project Goals
    12.2.3 Main Findings
    12.2.4 Approach
    12.2.5 Model Description
    12.2.6 Key Points Supported with Data
    12.2.7 Model Details
    12.2.8 Recommendations
    12.2.9 Additional Tips on Final Presentation
    12.2.10 Providing Technical Specifications and Code
  12.3 Data Visualization Basics
    12.3.1 Key Points Supported with Data
    12.3.2 Evolution of a Graph
    12.3.3 Common Representation Methods
    12.3.4 How to Clean Up a Graphic
    12.3.5 Additional Considerations
  Summary
  Exercises
  References and Further Reading
  Bibliography

Index


Foreword

Technological advances and the associated changes in practical daily life have produced a rapidly expanding "parallel universe" of new content, new data, and new information sources all around us. Regardless of how one defines it, the phenomenon of Big Data is ever more present, ever more pervasive, and ever more important. There is enormous value potential in Big Data: innovative insights, improved understanding of problems, and countless opportunities to predict, and even to shape, the future. Data Science is the principal means to discover and tap that potential. Data Science provides ways to deal with and benefit from Big Data: to see patterns, to discover relationships, and to make sense of stunningly varied images and information.

Not everyone has studied statistical analysis at a deep level. People with advanced degrees in applied mathematics are not a commodity. Relatively few organizations have committed resources to large collections of data gathered primarily for the purpose of exploratory analysis. And yet, while applying the practices of Data Science to Big Data is a valuable differentiating strategy at present, it will be a standard core competency in the not so distant future.

How does an organization operationalize quickly to take advantage of this trend? We've created this book for that exact purpose. EMC Education Services has been listening to the industry and organizations, observing the multi-faceted transformation of the technology landscape, and doing direct research in order to create curriculum and content to help individuals and organizations transform themselves. For the domain of Data Science and Big Data Analytics, our educational strategy balances three things: people, especially in the context of data science teams; processes, such as the analytic lifecycle approach presented in this book; and tools and technologies, in this case with the emphasis on proven analytic tools.

So let us help you capitalize on this new "parallel universe" that surrounds us. We invite you to learn about Data Science and Big Data Analytics through this book and hope it significantly accelerates your efforts in the transformational process.


Introduction

Big Data is creating significant new opportunities for organizations to derive new value and create competitive advantage from their most valuable asset: information. For businesses, Big Data helps drive efficiency, quality, and personalized products and services, producing improved levels of customer satisfaction and profit. For scientific efforts, Big Data analytics enable new avenues of investigation with potentially richer results and deeper insights than previously available. In many cases, Big Data analytics integrate structured and unstructured data with real-time feeds and queries, opening new paths to innovation and insight.

This book provides a practitioner's approach to some of the key techniques and tools used in Big Data analytics. Knowledge of these methods will help people become active contributors to Big Data analytics projects. The book's content is designed to assist multiple stakeholders: business and data analysts looking to add Big Data analytics skills to their portfolio; database professionals and managers of business intelligence, analytics, or Big Data groups looking to enrich their analytic skills; and college graduates investigating data science as a career field.

The content is structured in twelve chapters. The first chapter introduces the reader to the domain of Big Data, the drivers for advanced analytics, and the role of the data scientist. The second chapter presents an analytic project lifecycle designed for the particular characteristics and challenges of hypothesis-driven analysis with Big Data.

Chapter 3 examines fundamental statistical techniques in the context of the open source R analytic software environment. This chapter also highlights the importance of exploratory data analysis via visualizations and reviews the key notions of hypothesis development and testing.

Chapters 4 through 9 discuss a range of advanced analytical methods, including clustering, classification, regression analysis, time series, and text analysis.

Chapters 10 and 11 focus on specific technologies and tools that support advanced analytics with Big Data. In particular, the MapReduce paradigm and its instantiation in the Hadoop ecosystem, as well as advanced topics in SQL and in-database text analytics, form the focus of these chapters.


Chapter 12 provides guidance on operationalizing Big Data analytics projects. This chapter focuses on creating the final deliverables, converting an analytics project to an ongoing asset of an organization's operation, and creating clear, useful visual outputs based on the data.

EMC Academic Alliance

University and college faculties are invited to join the Academic Alliance program to access unique "open" curriculum-based education on the following topics:

• Data Science and Big Data Analytics
• Information Storage and Management
• Cloud Infrastructure and Services
• Backup Recovery Systems and Architecture

The program provides faculty with course resources to prepare students for opportunities that exist in today's evolving IT industry at no cost. For more information, visit http://education.EMC.com/academicalliance.

EMC Proven Professional Certification

EMC Proven Professional is a leading education and certification program in the IT industry, providing comprehensive coverage of information storage technologies, virtualization, cloud computing, data science/Big Data analytics, and more. Being proven means investing in yourself and formally validating your expertise. This book prepares you for the Data Science Associate (EMCDSA) certification. Visit http://education.EMC.com for details.


INTRODUCTION TO BIG DATA ANALYTICS

Much has been written about Big Data and the need for advanced analytics within industry, academia, and government. Availability of new data sources and the rise of more complex analytical opportunities have created a need to rethink existing data architectures to enable analytics that take advantage of Big Data. In addition, significant debate exists about what Big Data is and what kinds of skills are required to make best use of it. This chapter explains several key concepts to clarify what is meant by Big Data, why advanced analytics are needed, how Data Science differs from Business Intelligence (BI), and what new roles are needed for the new Big Data ecosystem.

1.1 Big Data Overview

Data is created constantly, and at an ever-increasing rate. Mobile phones, social media, imaging technologies to determine a medical diagnosis: all these and more create new data, and that data must be stored somewhere for some purpose. Devices and sensors automatically generate diagnostic information that needs to be stored and processed in real time. Merely keeping up with this huge influx of data is difficult, but substantially more challenging is analyzing vast amounts of it, especially when it does not conform to traditional notions of data structure, to identify meaningful patterns and extract useful information. These challenges of the data deluge present the opportunity to transform business, government, science, and everyday life.

Several industries have led the way in developing their ability to gather and exploit data:

• Credit card companies monitor every purchase their customers make and can identify fraudulent purchases with a high degree of accuracy using rules derived by processing billions of transactions.

• Mobile phone companies analyze subscribers' calling patterns to determine, for example, whether a caller's frequent contacts are on a rival network. If that rival network is offering an attractive promotion that might cause the subscriber to defect, the mobile phone company can proactively offer the subscriber an incentive to remain in her contract.

• For companies such as LinkedIn and Facebook, data itself is their primary product. The valuations of these companies are heavily derived from the data they gather and host, which contains more and more intrinsic value as the data grows.

Three attributes stand out as defining Big Data characteristics:

• Huge volume of data: Rather than thousands or millions of rows, Big Data can be billions of rows and millions of columns.

• Complexity of data types and structures: Big Data reflects the variety of new data sources, formats, and structures, including digital traces being left on the web and other digital repositories for subsequent analysis.

• Speed of new data creation and growth: Big Data can describe high-velocity data, with rapid data ingestion and near real-time analysis.

Although the volume of Big Data tends to attract the most attention, generally the variety and velocity of the data provide a more apt definition of Big Data. (Big Data is sometimes described as having 3 Vs: volume, variety, and velocity.) Due to its size or structure, Big Data cannot be efficiently analyzed using only traditional databases or methods. Big Data problems require new tools and technologies to store, manage, and realize the business benefit. These new tools and technologies enable creation, manipulation, and


management of large datasets and the storage environments that house them. Another definition of Big Data comes from the McKinsey Global report from 2011:

Big Data is data whose scale, distribution, diversity, and/or timeliness require the use of new technical architectures and analytics to enable insights that unlock new sources of business value.

McKinsey & Co.; Big Data: The Next Frontier for Innovation, Competition, and Productivity [1]

McKinsey's definition of Big Data implies that organizations will need new data architectures and analytic sandboxes, new tools, new analytical methods, and an integration of multiple skills into the new role of the data scientist, which will be discussed in Section 1.3. Figure 1-1 highlights several sources of the Big Data deluge.

FIGURE 1-1 What's driving the data deluge: mobile sensors, social media, video surveillance, video rendering, smart grids, geophysical exploration, medical imaging, and gene sequencing

The rate of data creation is accelerating, driven by many of the items in Figure 1-1. Social media and genetic sequencing are among the fastest-growing sources of Big Data and examples of untraditional sources of data being used for analysis.

For example, in 2012 Facebook users posted 700 status updates per second worldwide, which can be leveraged to deduce latent interests or political views of users and show relevant ads. For instance, an update in which a woman changes her relationship status from "single" to "engaged" would trigger ads on bridal dresses, wedding planning, or name-changing services.

Facebook can also construct social graphs to analyze which users are connected to each other as an interconnected network. In March 2013, Facebook released a new feature called "Graph Search," enabling users and developers to search social graphs for people with similar interests, hobbies, and shared locations.


Another example comes from genomics. Genetic sequencing and human genome mapping provide a detailed understanding of genetic makeup and lineage. The health care industry is looking toward these advances to help predict which illnesses a person is likely to get in his lifetime and take steps to avoid these maladies or reduce their impact through the use of personalized medicine and treatment. Such tests also highlight typical responses to different medications and pharmaceutical drugs, heightening risk awareness of specific drug treatments.

While data has grown, the cost to perform this work has fallen dramatically. The cost to sequence one human genome has fallen from $100 million in 2001 to $10,000 in 2011, and the cost continues to drop. Now, websites such as 23andme (Figure 1-2) offer genotyping for less than $100. Although genotyping analyzes only a fraction of a genome and does not provide as much granularity as genetic sequencing, it does point to the fact that data and complex analysis is becoming more prevalent and less expensive to deploy.

FIGURE 1-2 Examples of what can be learned through genotyping, from 23andme.com: an ancestry composition broken down by region, plus features for finding relatives and building a family tree


As illustrated by the examples of social media and genetic sequencing, individuals and organizations both derive benefits from analysis of ever-larger and more complex datasets that require increasingly powerful analytical capabilities.

1.1.1 Data Structures

Big data can come in multiple forms, including structured and non-structured data such as financial data, text files, multimedia files, and genetic mappings. Contrary to much of the traditional data analysis performed by organizations, most of the Big Data is unstructured or semi-structured in nature, which requires different techniques and tools to process and analyze. [2] Distributed computing environments and massively parallel processing (MPP) architectures that enable parallelized data ingest and analysis are the preferred approach to process such complex data.

With this in mind, this section takes a closer look at data structures. Figure 1-3 shows four types of data structures, with 80-90% of future data growth coming from non-structured data types. [2] Though different, the four are commonly mixed. For example, a classic Relational Database Management System (RDBMS) may store call logs for a software support call center. The RDBMS may store characteristics of the support calls as typical structured data, with attributes such as time stamps, machine type, problem type, and operating system. In addition, the system will likely have unstructured, quasi- or semi-structured data, such as free-form call log information taken from an e-mail ticket of the problem, customer chat history, or a transcript of a phone call describing the technical problem and the solution, or an audio file of the phone call conversation. Many insights could be extracted from the unstructured, quasi- or semi-structured data in the call center data.

FIGURE 1-3 Big Data growth is increasingly unstructured


Although analyzing structured data tends to be the most familiar technique, a different technique is required to meet the challenges to analyze semi-structured data (shown as XML), quasi-structured (shown as a clickstream), and unstructured data. Here are examples of how each of the four main types of data structures may look; a short code sketch contrasting the first two types follows the list.

• Structured data: Data containing a defined data type, format, and structure (that is, transaction data, online analytical processing [OLAP] data cubes, traditional RDBMS, CSV files, and even simple spreadsheets). See Figure 1-4.

FIGURE 1-4 Example of structured data: a tabular report of Summer Food Service Program sites, participation, meals served, and federal expenditures by fiscal year

• Semi-structured data: Textual data files with a discernible pattern that enables parsing (such as Extensible Markup Language [XML] data files that are self-describing and defined by an XML schema). See Figure 1-5.

• Quasi-structured data: Textual data with erratic data formats that can be formatted with effort, tools, and time (for instance, web clickstream data that may contain inconsistencies in data values and formats). See Figure 1-6.

• Unstructured data: Data that has no inherent structure, which may include text documents, PDFs, images, and video. See Figure 1-7.
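To make the distinction concrete, the following minimal R sketch (not from the book; the file names "meals.csv" and "catalog.xml" are invented for illustration) contrasts reading structured data, which maps directly into rows and columns, with parsing semi-structured XML, whose self-describing tags must be navigated first.

```r
# Minimal sketch contrasting structured vs. semi-structured data in R.
# Assumes hypothetical files "meals.csv" and "catalog.xml" and the
# CRAN "XML" package.
library(XML)

# Structured data: the schema is fixed, so one call yields a data frame
meals <- read.csv("meals.csv", header = TRUE)
str(meals)   # columns arrive with names and types already defined

# Semi-structured data: the structure lives in the tags themselves,
# so the document is parsed and then queried (here, via XPath)
doc    <- xmlParse("catalog.xml")
titles <- xpathSApply(doc, "//item/title", xmlValue)
head(titles)
```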


Quasi-structured data is a common phenomenon that bears closer scrutiny. Consider the following example. A user attends the EMC World conference and subsequently runs a Google search online to find information related to EMC and Data Science. This would produce a URL such as https://www.google.com/#q=EMC+data+science and a list of results, such as in the first graphic of Figure 1-6.

FIGURE 1-5 Example of semi-structured data: the self-describing HTML source of a web page, as viewed in a browser

After doing this search, the user may choose the second link, to read more about the headline "Data Scientist - EMC Education, Training, and Certification." This brings the user to an emc.com site focused on this topic and a new URL, https://education.emc.com/guest/campaign/data_science.aspx, that displays the page shown as (2) in Figure 1-6. Arriving at this site, the user may decide to click to learn more about the process of becoming certified in data science. The user chooses a link toward the top of the page on Certifications, bringing the user to a new URL: https://education.emc.com/guest/certification/framework/stf/data_science.aspx, which is (3) in Figure 1-6.

FIGURE 1-6 Example of EMC Data Science search results

Visiting these three websites adds three URLs to the log files monitoring the user's computer or network use. These three URLs are:

https://www.google.com/#q=EMC+data+science
https://education.emc.com/guest/campaign/data_science.aspx
https://education.emc.com/guest/certification/framework/stf/data_science.aspx
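As a rough illustration of how such quasi-structured entries begin to be tamed, the short R sketch below (hypothetical, not from the book) pulls the host and path out of raw URLs like the three above. In practice the surrounding log fields vary by server and format, which is precisely what demands the extra effort, tools, and time.

```r
# Hypothetical sketch: turning raw clickstream URLs into analyzable
# fields with base R regular expressions.
clicks <- c(
  "https://www.google.com/#q=EMC+data+science",
  "https://education.emc.com/guest/campaign/data_science.aspx",
  "https://education.emc.com/guest/certification/framework/stf/data_science.aspx"
)

# Extract the host and the path of each click
hosts <- sub("^https?://([^/#]+).*$", "\\1", clicks)
paths <- sub("^https?://[^/#]+", "", clicks)

# One row per click, in order: the user's path through the sites
data.frame(step = seq_along(clicks), host = hosts, path = paths)
```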


FIGURE 1-7 Example of unstructured data: video about Antarctica expedition [3]

This set of three URLs reflects the websites and actions taken to find Data Science information related to EMC. Together, this comprises a clickstream that can be parsed and mined by data scientists to discover usage patterns and uncover relationships among clicks and areas of interest on a website or group of sites; the sketch above shows one simple way to begin that parsing.

The four data types described in this chapter are sometimes generalized into two groups: structured and unstructured data. Big Data describes new kinds of data with which most organizations may not be used to working. With this in mind, the next section discusses common technology architectures from the standpoint of someone wanting to analyze Big Data.

1.1.2 Analyst Perspective on Data Repositories

The introduction of spreadsheets enabled business users to create simple logic on data structured in rows and columns and create their own analyses of business problems. Database administrator training is not required to create spreadsheets: They can be set up to do many things quickly and independently of information technology (IT) groups. Spreadsheets are easy to share, and end users have control over the logic involved. However, their proliferation can result in "many versions of the truth." In other words, it can be challenging to determine if a particular user has the most relevant version of a spreadsheet, with the most current data and logic in it. Moreover, if a laptop is lost or a file becomes corrupted, the data and logic within the spreadsheet could be lost. This is an ongoing challenge because spreadsheet programs such as Microsoft Excel still run on many computers worldwide. With the proliferation of data islands (or spreadmarts), the need to centralize the data is more pressing than ever.

As data needs grew, so did more scalable data warehousing solutions. These technologies enabled data to be managed centrally, providing benefits of security, failover, and a single repository where users


could rely on getting an "official" source of data for financial reporting or other mission-critical tasks. This structure also enabled the creation of OLAP cubes and BI analytical tools, which provided quick access to a set of dimensions within an RDBMS. More advanced features enabled performance of in-depth analytical techniques such as regressions and neural networks. Enterprise Data Warehouses (EDWs) are critical for reporting and BI tasks and solve many of the problems that proliferating spreadsheets introduce, such as which of multiple versions of a spreadsheet is correct. EDWs, and a good BI strategy, provide direct data feeds from sources that are centrally managed, backed up, and secured.

Despite the benefits of EDWs and BI, these systems tend to restrict the flexibility needed to perform robust or exploratory data analysis. With the EDW model, data is managed and controlled by IT groups and database administrators (DBAs), and data analysts must depend on IT for access and changes to the data schemas. This imposes longer lead times for analysts to get data; most of the time is spent waiting for approvals rather than starting meaningful work. Additionally, many times the EDW rules restrict analysts from building datasets. Consequently, it is common for additional systems to emerge containing critical data for constructing analytic datasets, managed locally by power users. IT groups generally dislike existence of data sources outside of their control because, unlike an EDW, these datasets are not managed, secured, or backed up.

From an analyst perspective, EDW and BI solve problems related to data accuracy and availability. However, EDW and BI introduce new problems related to flexibility and agility, which were less pronounced when dealing with spreadsheets. A solution to this problem is the analytic sandbox, which attempts to resolve the conflict for analysts and data scientists with EDW and more formally managed corporate data. In this model, the IT group may still manage the analytic sandboxes, but they will be purposefully designed to enable robust analytics, while being centrally managed and secured. These sandboxes, often referred to as workspaces, are designed to enable teams to explore many datasets in a controlled fashion and are not typically used for enterprise-level financial reporting and sales dashboards.

Many times, analytic sandboxes enable high-performance computing using in-database processing: the analytics occur within the database itself. The idea is that performance of the analysis will be better if the analytics are run in the database itself, rather than bringing the data to an analytical tool that resides somewhere else. In-database analytics, discussed further in Chapter 11, "Advanced Analytics - Technology and Tools: In-Database Analytics," creates relationships to multiple data sources within an organization and saves time spent creating these data feeds on an individual basis. In-database processing for deep analytics enables faster turnaround time for developing and executing new analytic models, while reducing, though not eliminating, the cost associated with data stored in local, "shadow" file systems. In addition, rather than the typical structured data in the EDW, analytic sandboxes can house a greater variety of data, such as raw data, textual data, and other kinds of unstructured data, without interfering with critical production databases.
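To make the in-database idea concrete, the sketch below contrasts pulling raw rows into R with pushing an aggregation down to the database. It is a minimal, assumption-laden example: the DBI and RSQLite packages stand in for a real EDW connection, and the sales table and its columns are invented for illustration.

library(DBI)
library(RSQLite)

# A small in-memory database standing in for the EDW (illustrative only)
con <- dbConnect(SQLite(), ":memory:")
dbWriteTable(con, "sales",
             data.frame(region = c("East", "West", "East"),
                        revenue = c(100, 250, 175)))

# Approach 1: bring the data to the tool; every row crosses the wire
sales <- dbGetQuery(con, "SELECT region, revenue FROM sales")
in_memory <- aggregate(revenue ~ region, data = sales, FUN = sum)

# Approach 2: in-database processing; the database computes the aggregate
# and returns only one summary row per region
in_db <- dbGetQuery(con, "SELECT region, SUM(revenue) AS revenue
                          FROM sales GROUP BY region")

dbDisconnect(con)

With a production-sized table, the second approach moves only the summarized result over the network, which is the performance argument made above.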
Table 1-1 summarizes the characteristics of the data repositories mentioned in this section.

TABLE 1-1 Types of Data Repositories, from an Analyst Perspective

Data Repository: Spreadsheets and data marts ("spreadmarts")
Characteristics: Spreadsheets and low-volume databases for recordkeeping; analyst depends on data extracts.


Data Repository: Data Warehouses
Characteristics: Centralized data containers in a purpose-built space; supports BI and reporting, but restricts robust analyses; analyst dependent on IT and DBAs for data access and schema changes; analysts must spend significant time to get aggregated and disaggregated data extracts from multiple sources.

Data Repository: Analytic Sandbox (workspaces)
Characteristics: Data assets gathered from multiple sources and technologies for analysis; enables flexible, high-performance analysis in a nonproduction environment; can leverage in-database processing; reduces costs and risks associated with data replication into "shadow" file systems; "analyst owned" rather than "DBA owned."

There are several things to consider with Big Data Analytics projects to ensure the approach fits with the desired goals. Due to the characteristics of Big Data, these projects lend themselves to decision support for high-value, strategic decision making with high processing complexity. The analytic techniques used in this context need to be iterative and flexible, due to the high volume of data and its complexity. Performing rapid and complex analysis requires high throughput network connections and a consideration for the acceptable amount of latency. For instance, developing a real-time product recommender for a website imposes greater system demands than developing a near-real-time recommender, which may still provide acceptable performance, have slightly greater latency, and may be cheaper to deploy. These considerations require a different approach to thinking about analytics challenges, which will be explored further in the next section.

1.2 State of the Practice in Analytics

Current business problems provide many opportunities for organizations to become more analytical and data driven, as shown in Table 1-2.

TABLE 1-2 Business Drivers for Advanced Analytics

Business Driver: Optimize business operations
Examples: Sales, pricing, profitability, efficiency

Business Driver: Identify business risk
Examples: Customer churn, fraud, default

Business Driver: Predict new business opportunities
Examples: Upsell, cross-sell, best new customer prospects

Business Driver: Comply with laws or regulatory requirements
Examples: Anti-Money Laundering, Fair Lending, Basel II-III, Sarbanes-Oxley (SOX)


Table 1-2 outlines four categories of common business problems that organizations contend with where they have an opportunity to leverage advanced analytics to create competitive advantage. Rather than only performing standard reporting on these areas, organizations can apply advanced analytical techniques to optimize processes and derive more value from these common tasks. The first three examples do not represent new problems. Organizations have been trying to reduce customer churn, increase sales, and cross-sell customers for many years. What is new is the opportunity to fuse advanced analytical techniques with Big Data to produce more impactful analyses for these traditional problems. The last example portrays emerging regulatory requirements. Many compliance and regulatory laws have been in existence for decades, but additional requirements are added every year, which represent additional complexity and data requirements for organizations. Laws related to anti-money laundering (AML) and fraud prevention require advanced analytical techniques to comply with and manage properly.

1.2.1 BI Versus Data Science

The four business drivers shown in Table 1-2 require a variety of analytical techniques to address them properly. Although much is written generally about analytics, it is important to distinguish between BI and Data Science. As shown in Figure 1-8, there are several ways to compare these groups of analytical techniques.

One way to evaluate the type of analysis being performed is to examine the time horizon and the kind of analytical approaches being used. BI tends to provide reports, dashboards, and queries on business questions for the current period or in the past. BI systems make it easy to answer questions related to quarter-to-date revenue, progress toward quarterly targets, and understand how much of a given product was sold in a prior quarter or year. These questions tend to be closed-ended and explain current or past behavior, typically by aggregating historical data and grouping it in some way. BI provides hindsight and some insight and generally answers questions related to "when" and "where" events occurred.

By comparison, Data Science tends to use disaggregated data in a more forward-looking, exploratory way, focusing on analyzing the present and enabling informed decisions about the future. Rather than aggregating historical data to look at how many of a given product sold in the previous quarter, a team may employ Data Science techniques such as time series analysis, further discussed in Chapter 8, "Advanced Analytical Theory and Methods: Time Series Analysis," to forecast future product sales and revenue more accurately than extending a simple trend line. In addition, Data Science tends to be more exploratory in nature and may use scenario optimization to deal with more open-ended questions. This approach provides insight into current activity and foresight into future events, while generally focusing on questions related to "how" and "why" events occur.

Where BI problems tend to require highly structured data organized in rows and columns for accurate reporting, Data Science projects tend to use many types of data sources, including large or unconventional datasets.
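As a small, self-contained illustration of the forecasting idea just mentioned (the quarterly sales series is simulated, and the model choice is arbitrary, shown only to contrast model-based forecasting with extending a trend line), the following base R snippet fits an ARIMA model and projects the next four quarters:

set.seed(42)

# Simulated quarterly unit sales with an upward trend (illustrative only)
sales <- ts(100 + 2 * (1:20) + rnorm(20, sd = 5),
            start = c(2010, 1), frequency = 4)

# Fit a simple ARIMA model on the differenced series
fit <- arima(sales, order = c(1, 1, 0))

# Forecast the next four quarters; the standard errors convey uncertainty,
# something a hand-drawn trend line cannot
predict(fit, n.ahead = 4)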
Depending on an organization's goals, it may choose to embark on a BI project if it is doing reporting, creating dashboards, or performing simple visualizations, or it may choose Data Science projects if it needs to do a more sophisticated analysis with disaggregated or varied datasets.


FIGURE 1-8 Comparing BI with Data Science

1.2.2 Current Analytical Architecture

As described earlier, Data Science projects need workspaces that are purpose-built for experimenting with data, with flexible and agile data architectures. Most organizations still have data warehouses that provide excellent support for traditional reporting and simple data analysis activities but unfortunately have a more difficult time supporting more robust analyses. This section examines a typical analytical data architecture that may exist within an organization.

Figure 1-9 shows a typical data architecture and several of the challenges it presents to data scientists and others trying to do advanced analytics. This section examines the data flow to the Data Scientist and how this individual fits into the process of getting data to analyze on projects.


FIGURE 1-9 Typical analytic architecture

1. For data sources to be loaded into the data warehouse, data needs to be well understood, structured, and normalized with the appropriate data type definitions. Although this kind of centralization enables security, backup, and failover of highly critical data, it also means that data typically must go through significant preprocessing and checkpoints before it can enter this sort of controlled environment, which does not lend itself to data exploration and iterative analytics.

2. As a result of this level of control on the EDW, additional local systems may emerge in the form of departmental warehouses and local data marts that business users create to accommodate their need for flexible analysis. These local data marts may not have the same constraints for security and structure as the main EDW and allow users to do some level of more in-depth analysis. However, these one-off systems reside in isolation, often are not synchronized or integrated with other data stores, and may not be backed up.

3. Once in the data warehouse, data is read by additional applications across the enterprise for BI and reporting purposes. These are high-priority operational processes getting critical data feeds from the data warehouses and repositories.

4. At the end of this workflow, analysts get data provisioned for their downstream analytics. Because users generally are not allowed to run custom or intensive analytics on production databases, analysts create data extracts from the EDW to analyze data offline in R or other local analytical tools. Many times these tools are limited to in-memory analytics on desktops analyzing samples of data, rather than the entire population of a dataset (a small illustration of this sampling step follows this list). Because these analyses are based on data extracts, they reside in a separate location, and the results of the analysis, and any insights on the quality of the data or anomalies, rarely are fed back into the main data repository.

Because new data sources slowly accumulate in the EDW due to the rigorous validation and data structuring process, data is slow to move into the EDW, and the data schema is slow to change.
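As a minimal sketch of the sampling step in item 4 (the file name and column layout are hypothetical placeholders, not from this book), an analyst might pull a random fraction of a large EDW extract into memory rather than the full population:

# Hypothetical EDW extract; assume it is too large to model comfortably in full
extract <- read.csv("edw_extract.csv")   # placeholder file name

# Keep a 1% simple random sample for local, in-memory analysis
set.seed(1)
keep <- sample(nrow(extract), size = ceiling(0.01 * nrow(extract)))
analysis_set <- extract[keep, ]

# Any model built here sees only the sample, not the full population
summary(analysis_set)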


Departmental data warehouses may have been originally designed for a specific purpose and set of business needs, but over time evolved to house more and more data, some of which may be forced into existing schemas to enable BI and the creation of OLAP cubes for analysis and reporting. Although the EDW achieves the objective of reporting and sometimes the creation of dashboards, EDWs generally limit the ability of analysts to iterate on the data in a separate nonproduction environment where they can conduct in-depth analytics or perform analysis on unstructured data.

The typical data architectures just described are designed for storing and processing mission-critical data, supporting enterprise applications, and enabling corporate reporting activities. Although reports and dashboards are still important for organizations, most traditional data architectures inhibit data exploration and more sophisticated analysis. Moreover, traditional data architectures have several additional implications for data scientists.

• High-value data is hard to reach and leverage, and predictive analytics and data mining activities are last in line for data. Because the EDWs are designed for central data management and reporting, those wanting data for analysis are generally prioritized after operational processes.

• Data moves in batches from EDW to local analytical tools. This workflow means that data scientists are limited to performing in-memory analytics (such as with R, SAS, SPSS, or Excel), which will restrict the size of the datasets they can use. As such, analysis may be subject to constraints of sampling, which can skew model accuracy.

• Data Science projects will remain isolated and ad hoc, rather than centrally managed. The implication of this isolation is that the organization can never harness the power of advanced analytics in a scalable way, and Data Science projects will exist as nonstandard initiatives, which are frequently not aligned with corporate business goals or strategy.

All these symptoms of the traditional data architecture result in a slow "time-to-insight" and lower business impact than could be achieved if the data were more readily accessible and supported by an environment that promoted advanced analytics. As stated earlier, one solution to this problem is to introduce analytic sandboxes to enable data scientists to perform advanced analytics in a controlled and sanctioned way. Meanwhile, the current Data Warehousing solutions continue offering reporting and BI services to support management and mission-critical operations.

1.2.3 Drivers of Big Data

To better understand the market drivers related to Big Data, it is helpful to first understand some past history of data stores and the kinds of repositories and tools to manage these data stores.

As shown in Figure 1-10, in the 1990s the volume of information was often measured in terabytes. Most organizations analyzed structured data in rows and columns and used relational databases and data warehouses to manage large stores of enterprise information. The following decade saw a proliferation of different kinds of data sources, mainly productivity and publishing tools such as content management repositories and networked attached storage systems, to manage this kind of information, and the data began to increase in size and started to be measured at petabyte scales. In the 2010s, the information that organizations try to manage has broadened to include many other kinds of data.
In this era, everyone and everything is leaving a digital footprint. Figure 1-10 shows a summary perspective on sources of Big Data generated by new applications and the scale and growth rate of the data. These applications, which generate data volumes that can be measured in exabyte scale, provide opportunities for new analytics and drive new value for organizations. The data now comes from multiple sources, such as these:


• Medical information, such as genomic sequencing and diagnostic imaging

• Photos and video footage uploaded to the World Wide Web

• Video surveillance, such as the thousands of video cameras spread across a city

• Mobile devices, which provide geospatial location data of the users, as well as metadata about text messages, phone calls, and application usage on smartphones

• Smart devices, which provide sensor-based collection of information from smart electric grids, smart buildings, and many other public and industry infrastructures

• Nontraditional IT devices, including the use of radio-frequency identification (RFID) readers, GPS navigation systems, and seismic processing

FIGURE 1-10 Data evolution and the rise of Big Data sources (1990s, measured in terabytes: RDBMS and data warehouses; 2000s, measured in petabytes: content and digital asset management; 2010s, will be measured in exabytes: NoSQL and key-value stores)

The Big Data trend is generating an enormous amount of information from many new sources. This data deluge requires advanced analytics and new market players to take advantage of these opportunities and new market dynamics, which will be discussed in the following section.

1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics

Organizations and data collectors are realizing that the data they can gather from individuals contains intrinsic value and, as a result, a new economy is emerging. As this new digital economy continues to


1.2 State of the Practice in Analytics evol ve, the market sees the introduction of data vendors and data cleaners that use crowdsourcing (such as Mechanical Turk and Ga laxyZoo) to test the outcomes of machine learning techniques. Other vendors offer added va lue by repackaging open source tools in a simpler way and bringing the tools to market. Vendors such as Cloudera, Hortonworks, and Pivotal have provid ed thi svalue-add for the open source framework Hadoop. As the new ecosystem takes shape, there are four main groups of playe rs within this interconnected web. These are shown in Figure 1-11. • Data devices [shown in the (1) section of Figure 1-1 1] and the \"Sensornet\" gat her data from multiple locationsand continuously generate new data about th is data. For each gigabyte of new data cre- ated, an additional petabyte of data iscreated about that data. [2) • For example, consider someone playing an online video game through a PC, game console, or smartphone. In this case, the video game provider captures data about the skill and levels attained by the player. Intelligent systems monitor and log how and when the user plays the game. As a consequence, the game provider can fine-tune the difficulty of the game, suggest other related games that would most likely interest the user, and offer add itional equipment and enhancements for the character based on the user's age, gender, and interests. Th is information may get stored loca lly or uploaded to the game provider's cloud to analyze t he gaming habits and opportunities for upsell and cross-sell, and identify archetypical profiles of specific kinds of users. • Smartphones provide another rich source of data. In addition to messag ing and basic phone usage, they store and transmit data about Internet usage, SMS usage, and real-time location. This metadata can be used for analyzing traffic patterns by sca nning the density of smart- phones in locations to track the speed of cars or the relative traffic congestion on busy roads. In t his way, GPS devices in ca rs can give drivers real-time updates an d offer alternative routes to avoid traffic delays. • Retail shopping loyalty cards record not just the amo unt an individual spends, but the loca- tionsof stores that person visits, the kind sof products purchased, the stores where goods are purchased most often, and the combinations of prod ucts purchased together. Collecting this data provides insights into shopping and travel habits and the likelihood of successful advertisement targeting for certa in types of retail promotions. • Data collectors [the blue ovals, identified as(2) within Figure 1-11] incl ude sa mple entities that col lect data from the device and users. • Data resul ts from a cable TV provider tracking the shows a person watches, which TV channels someone wi ll and will not pay for to watch on demand, and t he prices someone is will ing to pay fo r premium TV content • Retail stores tracking the path a customer takes through their store while pushing a shop- ping cart with an RFID chip so they can gauge which products get the most foot traffic using geospatial data co llected from t he RFID chips • Data aggregators (thedark gray ovalsin Figure 1-11, marked as (3)) make sense of the data co llected from the various entities from the \"SensorN et\" or the \"Internet ofThings.\" These organizatio ns compiledata from the devices an d usage patternscollected by government agencies, retail stores,


and websites. In turn, they can choose to transform and package the data as products to sell to list brokers, who may want to generate marketing lists of people who may be good targets for specific ad campaigns.

• Data users and buyers are denoted by (4) in Figure 1-11. These groups directly benefit from the data collected and aggregated by others within the data value chain.

• Retail banks, acting as a data buyer, may want to know which customers have the highest likelihood to apply for a second mortgage or a home equity line of credit. To provide input for this analysis, retail banks may purchase data from a data aggregator. This kind of data may include demographic information about people living in specific locations; people who appear to have a specific level of debt, yet still have solid credit scores (or other characteristics such as paying bills on time and having savings accounts) that can be used to infer credit worthiness; and those who are searching the web for information about paying off debts or doing home remodeling projects. Obtaining data from these various sources and aggregators will enable a more targeted marketing campaign, which would have been more challenging before Big Data due to the lack of information or high-performing technologies.

• Using technologies such as Hadoop to perform natural language processing on unstructured, textual data from social media websites, users can gauge the reaction to events such as presidential campaigns. People may, for example, want to determine public sentiments toward a candidate by analyzing related blogs and online comments. Similarly, data users may want to track and prepare for natural disasters by identifying which areas a hurricane affects first and how it moves, based on which geographic areas are tweeting about it or discussing it via social media. (A small sketch of this kind of sentiment scoring follows Figure 1-11.)

FIGURE 1-11 Emerging Big Data ecosystem
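As promised above, here is a toy illustration of that kind of sentiment scoring. The comments and the two-word lexicons are invented for this sketch; real systems use large lexicons or trained models and run at cluster scale on platforms such as Hadoop.

# Invented sample comments and a deliberately tiny sentiment lexicon
comments <- c("great speech, strong plan",
              "weak answers and a bad plan",
              "strong record, great debate")
positive <- c("great", "strong")
negative <- c("weak", "bad")

score_comment <- function(text) {
  words <- strsplit(tolower(text), "[^a-z]+")[[1]]
  sum(words %in% positive) - sum(words %in% negative)
}

# Positive scores suggest favorable sentiment; negative, unfavorable
sapply(comments, score_comment)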


As illustrated by this emerging Big Data ecosystem, the kinds of data and the related market dynamics vary greatly. These datasets can include sensor data, text, structured datasets, and social media. With this in mind, it is worth recalling that these datasets will not work well within traditional EDWs, which were architected to streamline reporting and dashboards and be centrally managed. Instead, Big Data problems and projects require different approaches to succeed.

Analysts need to partner with IT and DBAs to get the data they need within an analytic sandbox. A typical analytical sandbox contains raw data, aggregated data, and data with multiple kinds of structure. The sandbox enables robust exploration of data and requires a savvy user to leverage and take advantage of data in the sandbox environment.

1.3 Key Roles for the New Big Data Ecosystem

As explained in the context of the Big Data ecosystem in Section 1.2.4, new players have emerged to curate, store, produce, clean, and transact data. In addition, the need for applying more advanced analytical techniques to increasingly complex business problems has driven the emergence of new roles, new technology platforms, and new analytical methods. This section explores the new roles that address these needs, and subsequent chapters explore some of the analytical methods and technology platforms.

The Big Data ecosystem demands three categories of roles, as shown in Figure 1-12. These roles were described in the McKinsey Global study on Big Data, from May 2011 [1].

FIGURE 1-12 Key roles of the new Big Data ecosystem

The first group, Deep Analytical Talent, is technically savvy, with strong analytical skills. Members possess a combination of skills to handle raw, unstructured data and to apply complex analytical techniques at


massive scales. This group has advanced training in quantitative disciplines, such as mathematics, statistics, and machine learning. To do their jobs, members need access to a robust analytic sandbox or workspace where they can perform large-scale analytical data experiments. Examples of current professions fitting into this group include statisticians, economists, mathematicians, and the new role of the Data Scientist.

The McKinsey study forecasts that by the year 2018, the United States will have a talent gap of 140,000 to 190,000 people with deep analytical talent. This does not represent the number of people needed with deep analytical talent; rather, this range represents the difference between what will be available in the workforce compared with what will be needed. In addition, these estimates only reflect forecasted talent shortages in the United States; the number would be much larger on a global basis.

The second group, Data Savvy Professionals, has less technical depth but has a basic knowledge of statistics or machine learning and can define key questions that can be answered using advanced analytics. These people tend to have a base knowledge of working with data, or an appreciation for some of the work being performed by data scientists and others with deep analytical talent. Examples of data savvy professionals include financial analysts, market research analysts, life scientists, operations managers, and business and functional managers.

The McKinsey study forecasts the projected U.S. talent gap for this group to be 1.5 million people by the year 2018. At a high level, this means for every Data Scientist profile needed, the gap will be ten times as large for Data Savvy Professionals. Moving toward becoming a data savvy professional is a critical step in broadening the perspective of managers, directors, and leaders, as this provides an idea of the kinds of questions that can be solved with data.

The third category of people mentioned in the study is Technology and Data Enablers. This group represents people providing technical expertise to support analytical projects, such as provisioning and administrating analytical sandboxes, and managing large-scale data architectures that enable widespread analytics within companies and other organizations. This role requires skills related to computer engineering, programming, and database administration.

These three groups must work together closely to solve complex Big Data challenges. Most organizations are familiar with people in the latter two groups mentioned, but the first group, Deep Analytical Talent, tends to be the newest role for most and the least understood.

For simplicity, this discussion focuses on the emerging role of the Data Scientist. It describes the kinds of activities that role performs and provides a more detailed view of the skills needed to fulfill that role.

There are three recurring sets of activities that data scientists perform:

• Reframe business challenges as analytics challenges. Specifically, this is a skill to diagnose business problems, consider the core of a given problem, and determine which kinds of candidate analytical methods can be applied to solve it. This concept is explored further in Chapter 2, "Data Analytics Lifecycle."

• Design, implement, and deploy statistical models and data mining techniques on Big Data. This set of activities is mainly what people think about when they consider the role of the Data Scientist:


namely, applying complex or advanced analytical methods to a variety of business problems using data. Chapter 3 through Chapter 11 of this book introduces the reader to many of the most popular analytical techniques and tools in this area.

• Develop insights that lead to actionable recommendations. It is critical to note that applying advanced methods to data problems does not necessarily drive new business value. Instead, it is important to learn how to draw insights out of the data and communicate them effectively. Chapter 12, "The Endgame, or Putting It All Together," has a brief overview of techniques for doing this.

Data scientists are generally thought of as having five main sets of skills and behavioral characteristics, as shown in Figure 1-13:

• Quantitative skill: such as mathematics or statistics

• Technical aptitude: namely, software engineering, machine learning, and programming skills

• Skeptical mind-set and critical thinking: It is important that data scientists can examine their work critically rather than in a one-sided way.

• Curious and creative: Data scientists are passionate about data and finding creative ways to solve problems and portray information.

• Communicative and collaborative: Data scientists must be able to articulate the business value in a clear way and collaboratively work with other groups, including project sponsors and key stakeholders.

FIGURE 1-13 Profile of a Data Scientist


Data scientists are generally comfortable using this blend of skills to acquire, manage, analyze, and visualize data and tell compelling stories about it. The next section includes examples of what Data Science teams have created to drive new value or innovation with Big Data.

1.4 Examples of Big Data Analytics

After describing the emerging Big Data ecosystem and new roles needed to support its growth, this section provides three examples of Big Data Analytics in different areas: retail, IT infrastructure, and social media.

As mentioned earlier, Big Data presents many opportunities to improve sales and marketing analytics. An example of this is the U.S. retailer Target. Charles Duhigg's book The Power of Habit [4] discusses how Target used Big Data and advanced analytical methods to drive new revenue. After analyzing consumer-purchasing behavior, Target's statisticians determined that the retailer made a great deal of money from three main life-event situations:

• Marriage, when people tend to buy many new products

• Divorce, when people buy new products and change their spending habits

• Pregnancy, when people have many new things to buy and have an urgency to buy them

Target determined that the most lucrative of these life-events is the third situation: pregnancy. Using data collected from shoppers, Target was able to identify this fact and predict which of its shoppers were pregnant. In one case, Target knew a female shopper was pregnant even before her family knew [5]. This kind of knowledge allowed Target to offer specific coupons and incentives to their pregnant shoppers. In fact, Target could not only determine if a shopper was pregnant, but in which month of pregnancy a shopper may be. This enabled Target to manage its inventory, knowing that there would be demand for specific products and it would likely vary by month over the coming nine- to ten-month cycles.

Hadoop [6] represents another example of Big Data innovation on the IT infrastructure. Apache Hadoop is an open source framework that allows companies to process vast amounts of information in a highly parallelized way. Hadoop represents a specific implementation of the MapReduce paradigm and was designed by Doug Cutting and Mike Cafarella in 2005 to use data with varying structures. It is an ideal technical framework for many Big Data projects, which rely on large or unwieldy datasets with unconventional data structures. One of the main benefits of Hadoop is that it employs a distributed file system, meaning it can use a distributed cluster of servers and commodity hardware to process large amounts of data. Some of the most common examples of Hadoop implementations are in the social media space, where Hadoop can manage transactions, give textual updates, and develop social graphs among millions of users. Twitter and Facebook generate massive amounts of unstructured data and use Hadoop and its ecosystem of tools to manage this high volume. Hadoop and its ecosystem are covered in Chapter 10, "Advanced Analytics - Technology and Tools: MapReduce and Hadoop."
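To give a feel for the MapReduce paradigm that Hadoop implements, the sketch below counts words in a few invented status updates. This is a single-machine toy in base R, not Hadoop itself; a real Hadoop job performs the same map, shuffle, and reduce steps distributed across a cluster.

# Invented status updates standing in for a stream of textual posts
updates <- c("big data is big",
             "hadoop processes big data",
             "data science with hadoop")

# Map: emit every word from every record
words <- unlist(lapply(updates, function(u) strsplit(u, " ")[[1]]))

# Shuffle: group a value of 1 for each occurrence under its word (the key)
groups <- split(rep(1, length(words)), words)

# Reduce: sum the values for each key to get per-word counts
counts <- sapply(groups, sum)
print(counts)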


Finally, social media represents a tremendous opportunity to leverage social and professional interactions to derive new insights. LinkedIn exemplifies a company in which data itself is the product. Early on, LinkedIn founder Reid Hoffman saw the opportunity to create a social network for working professionals. As of 2014, LinkedIn has more than 250 million user accounts and has added many additional features and data-related products, such as recruiting, job seeker tools, advertising, and InMaps, which shows a social graph of a user's professional network. Figure 1-14 is an example of an InMap visualization that enables a LinkedIn user to get a broader view of the interconnectedness of his contacts and understand how he knows most of them.

FIGURE 1-14 Data visualization of a user's social network using InMaps

Summary

Big Data comes from myriad sources, including social media, sensors, the Internet of Things, video surveillance, and many sources of data that may not have been considered data even a few years ago. As businesses struggle to keep up with changing market requirements, some companies are finding creative ways to apply Big Data to their growing business needs and increasingly complex problems. As organizations evolve their processes and see the opportunities that Big Data can provide, they try to move beyond traditional BI activities, such as using data to populate reports and dashboards, and move toward Data Science-driven projects that attempt to answer more open-ended and complex questions.

However, exploiting the opportunities that Big Data presents requires new data architectures, including analytic sandboxes, new ways of working, and people with new skill sets. These drivers are causing organizations to set up analytic sandboxes and build Data Science teams. Although some organizations are fortunate to have data scientists, most are not, because there is a growing talent gap that makes finding and hiring data scientists in a timely manner difficult. Still, organizations such as those in web retail, health care, genomics, new IT infrastructures, and social media are beginning to take advantage of Big Data and apply it in creative and novel ways.

Exercises

1. What are the three characteristics of Big Data, and what are the main considerations in processing Big Data?

2. What is an analytic sandbox, and why is it important?

3. Explain the differences between BI and Data Science.

4. Describe the challenges of the current analytical architecture for data scientists.

5. What are the key skill sets and behavioral characteristics of a data scientist?


Bibliography

[1] J. Manyika et al., "Big Data: The Next Frontier for Innovation, Competition, and Productivity," McKinsey Global Institute, 2011.

[2] J. Gantz and D. Reinsel, "The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East," IDC, 2013.

[3] http://www.willisresilience.com/emc-datalab [Online].

[4] C. Duhigg, The Power of Habit: Why We Do What We Do in Life and Business, New York: Random House, 2012.

[5] K. Hill, "How Target Figured Out a Teen Girl Was Pregnant Before Her Father Did," Forbes, February 2012.

[6] http://hadoop.apache.org [Online].


DATA ANALYTICS LIFECYCLE

Data science projects differ from most traditional Business Intelligence projects and many data analysis projects in that data science projects are more exploratory in nature. For this reason, it is critical to have a process to govern them and ensure that the participants are thorough and rigorous in their approach, yet not so rigid that the process impedes exploration.

Many problems that appear huge and daunting at first can be broken down into smaller pieces or actionable phases that can be more easily addressed. Having a good process ensures a comprehensive and repeatable method for conducting analysis. In addition, it helps focus time and energy early in the process to get a clear grasp of the business problem to be solved.

A common mistake made in data science projects is rushing into data collection and analysis, which precludes spending sufficient time to plan and scope the amount of work involved, understanding requirements, or even framing the business problem properly. Consequently, participants may discover mid-stream that the project sponsors are actually trying to achieve an objective that may not match the available data, or they are attempting to address an interest that differs from what has been explicitly communicated. When this happens, the project may need to revert to the initial phases of the process for a proper discovery phase, or the project may be canceled.

Creating and documenting a process helps demonstrate rigor, which provides additional credibility to the project when the data science team shares its findings. A well-defined process also offers a common framework for others to adopt, so the methods and analysis can be repeated in the future or as new members join a team.

2.1 Data Analytics Lifecycle Overview

The Data Analytics Lifecycle is designed specifically for Big Data problems and data science projects. The lifecycle has six phases, and project work can occur in several phases at once. For most phases in the lifecycle, the movement can be either forward or backward. This iterative depiction of the lifecycle is intended to more closely portray a real project, in which aspects of the project move forward and may return to earlier stages as new information is uncovered and team members learn more about various stages of the project. This enables participants to move iteratively through the process and drive toward operationalizing the project work.

2.1.1 Key Roles for a Successful Analytics Project

In recent years, substantial attention has been placed on the emerging role of the data scientist. In October 2012, Harvard Business Review featured an article titled "Data Scientist: The Sexiest Job of the 21st Century" [1], in which experts DJ Patil and Tom Davenport described the new role and how to find and hire data scientists. More and more conferences are held annually focusing on innovation in the areas of Data Science and topics dealing with Big Data. Despite this strong focus on the emerging role of the data scientist specifically, there are actually seven key roles that need to be fulfilled for a high-functioning data science team to execute analytic projects successfully.

Figure 2-1 depicts the various roles and key stakeholders of an analytics project. Each plays a critical part in a successful analytics project. Although seven roles are listed, fewer or more people can accomplish the work depending on the scope of the project, the organizational structure, and the skills of the participants.
For example, on a small, versatile team, these seven roles may be fulfilled by only 3 people, but a very large project may require 20 or more people. The seven roles follow.


FIGURE 2-1 Key roles for a successful analytics project

• Business User: Someone who understands the domain area and usually benefits from the results. This person can consult and advise the project team on the context of the project, the value of the results, and how the outputs will be operationalized. Usually a business analyst, line manager, or deep subject matter expert in the project domain fulfills this role.

• Project Sponsor: Responsible for the genesis of the project. Provides the impetus and requirements for the project and defines the core business problem. Generally provides the funding and gauges the degree of value from the final outputs of the working team. This person sets the priorities for the project and clarifies the desired outputs.

• Project Manager: Ensures that key milestones and objectives are met on time and at the expected quality.

• Business Intelligence Analyst: Provides business domain expertise based on a deep understanding of the data, key performance indicators (KPIs), key metrics, and business intelligence from a reporting perspective. Business Intelligence Analysts generally create dashboards and reports and have knowledge of the data feeds and sources.

• Database Administrator (DBA): Provisions and configures the database environment to support the analytics needs of the working team. These responsibilities may include providing access to key databases or tables and ensuring the appropriate security levels are in place related to the data repositories.

• Data Engineer: Leverages deep technical skills to assist with tuning SQL queries for data management and data extraction, and provides support for data ingestion into the analytic sandbox, which


was discussed in Chapter 1, "Introduction to Big Data Analytics." Whereas the DBA sets up and configures the databases to be used, the data engineer executes the actual data extractions and performs substantial data manipulation to facilitate the analytics. The data engineer works closely with the data scientist to help shape data in the right ways for analyses.

• Data Scientist: Provides subject matter expertise for analytical techniques, data modeling, and applying valid analytical techniques to given business problems. Ensures overall analytics objectives are met. Designs and executes analytical methods and approaches with the data available to the project.

Although most of these roles are not new, the last two roles, data engineer and data scientist, have become popular and in high demand [2] as interest in Big Data has grown.

2.1.2 Background and Overview of Data Analytics Lifecycle

The Data Analytics Lifecycle defines analytics process best practices spanning discovery to project completion. The lifecycle draws from established methods in the realm of data analytics and decision science. This synthesis was developed after gathering input from data scientists and consulting established approaches that provided input on pieces of the process. Several of the processes that were consulted include these:

• Scientific method [3], in use for centuries, still provides a solid framework for thinking about and deconstructing problems into their principal parts. One of the most valuable ideas of the scientific method relates to forming hypotheses and finding ways to test ideas.

• CRISP-DM [4] provides useful input on ways to frame analytics problems and is a popular approach for data mining.

• Tom Davenport's DELTA framework [5]: The DELTA framework offers an approach for data analytics projects, including the context of the organization's skills, datasets, and leadership engagement.

• Doug Hubbard's Applied Information Economics (AIE) approach [6]: AIE provides a framework for measuring intangibles and provides guidance on developing decision models, calibrating expert estimates, and deriving the expected value of information.

• "MAD Skills" by Cohen et al. [7] offers input for several of the techniques mentioned in Phases 2-4 that focus on model planning, execution, and key findings.

Figure 2-2 presents an overview of the Data Analytics Lifecycle that includes six phases. Teams commonly learn new things in a phase that cause them to go back and refine the work done in prior phases based on new insights and information that have been uncovered. For this reason, Figure 2-2 is shown as a cycle. The circular arrows convey iterative movement between phases until the team members have sufficient information to move to the next phase. The callouts include sample questions to ask to help guide whether each of the team members has enough information and has made enough progress to move to the next phase of the process. Note that these phases do not represent formal stage gates; rather, they serve as criteria to help test whether it makes sense to stay in the current phase or move to the next.


FIGURE 2-2 Overview of Data Analytics Lifecycle (callouts in the figure ask: "Do I have enough information to draft an analytic plan and share for peer review?" "Do I have enough good quality data to start building the model?" "Do I have a good idea about the type of model to try? Can I refine the analytic plan?" "Is the model robust enough? Have we failed for sure?")

Here is a brief overview of the main phases of the Data Analytics Lifecycle:

• Phase 1 - Discovery: In Phase 1, the team learns the business domain, including relevant history such as whether the organization or business unit has attempted similar projects in the past from which they can learn. The team assesses the resources available to support the project in terms of people, technology, time, and data. Important activities in this phase include framing the business problem as an analytics challenge that can be addressed in subsequent phases and formulating initial hypotheses (IHs) to test and begin learning the data.

• Phase 2 - Data preparation: Phase 2 requires the presence of an analytic sandbox, in which the team can work with data and perform analytics for the duration of the project. The team needs to execute extract, load, and transform (ELT) or extract, transform and load (ETL) to get data into the sandbox. The ELT and ETL are sometimes abbreviated as ETLT. Data should be transformed in the ETLT process so the team can work with it and analyze it. In this phase, the team also needs to familiarize itself with the data thoroughly and take steps to condition the data (Section 2.3.4).


• Phase 3 - Model planning: Phase 3 is model planning, where the team determines the methods, techniques, and workflow it intends to follow for the subsequent model building phase. The team explores the data to learn about the relationships between variables and subsequently selects key variables and the most suitable models.

• Phase 4 - Model building: In Phase 4, the team develops datasets for testing, training, and production purposes. In addition, in this phase the team builds and executes models based on the work done in the model planning phase. The team also considers whether its existing tools will suffice for running the models, or if it will need a more robust environment for executing models and workflows (for example, fast hardware and parallel processing, if applicable).

• Phase 5 - Communicate results: In Phase 5, the team, in collaboration with major stakeholders, determines if the results of the project are a success or a failure based on the criteria developed in Phase 1. The team should identify key findings, quantify the business value, and develop a narrative to summarize and convey findings to stakeholders.

• Phase 6 - Operationalize: In Phase 6, the team delivers final reports, briefings, code, and technical documents. In addition, the team may run a pilot project to implement the models in a production environment.

Once team members have run models and produced findings, it is critical to frame these results in a way that is tailored to the audience that engaged the team. Moreover, it is critical to frame the results of the work in a manner that demonstrates clear value. If the team performs a technically accurate analysis but fails to translate the results into a language that resonates with the audience, people will not see the value, and much of the time and effort on the project will have been wasted.

The rest of the chapter is organized as follows. Sections 2.2-2.7 discuss in detail how each of the six phases works, and Section 2.8 shows a case study of incorporating the Data Analytics Lifecycle in a real-world data science project.

2.2 Phase 1: Discovery

The first phase of the Data Analytics Lifecycle involves discovery (Figure 2-3). In this phase, the data science team must learn and investigate the problem, develop context and understanding, and learn about the data sources needed and available for the project. In addition, the team formulates initial hypotheses that can later be tested with data.

2.2.1 Learning the Business Domain

Understanding the domain area of the problem is essential. In many cases, data scientists will have deep computational and quantitative knowledge that can be broadly applied across many disciplines. An example of this role would be someone with an advanced degree in applied mathematics or statistics.

These data scientists have deep knowledge of the methods, techniques, and ways for applying heuristics to a variety of business and conceptual problems. Others in this area may have deep knowledge of a domain area, coupled with quantitative expertise. An example of this would be someone with a Ph.D. in life sciences. This person would have deep knowledge of a field of study, such as oceanography, biology, or genetics, with some depth of quantitative knowledge.

At this early stage in the process, the team needs to determine how much business or domain knowledge the data scientist needs to develop models in Phases 3 and 4. The earlier the team can make this assessment


the better, because the decision helps dictate the resources needed for the project team and ensures the team has the right balance of domain knowledge and technical expertise.

FIGURE 2-3 Discovery phase (callout: "Do I have enough information to draft an analytic plan and share for peer review?")

2.2.2 Resources

As part of the discovery phase, the team needs to assess the resources available to support the project. In this context, resources include technology, tools, systems, data, and people. During this scoping, consider the available tools and technology the team will be using and the types of systems needed for later phases to operationalize the models. In addition, try to evaluate the level of analytical sophistication within the organization and gaps that may exist related to tools, technology, and skills. For instance, for the model being developed to have longevity in an organization, consider what types of skills and roles will be required that may not exist today. For the project to have long-term success,


what types of skills and roles will be needed for the recipients of the model being developed? Does the requisite level of expertise exist within the organization today, or will it need to be cultivated? Answering these questions will influence the techniques the team selects and the kind of implementation the team chooses to pursue in subsequent phases of the Data Analytics Lifecycle.

In addition to the skills and computing resources, it is advisable to take inventory of the types of data available to the team for the project. Consider if the data available is sufficient to support the project's goals. The team will need to determine whether it must collect additional data, purchase it from outside sources, or transform existing data. Often, projects are started looking only at the data available. When the data is less than hoped for, the size and scope of the project is reduced to work within the constraints of the existing data.

An alternative approach is to consider the long-term goals of this kind of project, without being constrained by the current data. The team can then consider what data is needed to reach the long-term goals and which pieces of this multistep journey can be achieved today with the existing data. Considering longer-term goals along with short-term goals enables teams to pursue more ambitious projects and treat a project as the first step of a more strategic initiative, rather than as a standalone initiative. It is critical to view projects as part of a longer-term journey, especially if executing projects in an organization that is new to Data Science and may not have embarked on the optimum datasets to support robust analyses up to this point.

Ensure the project team has the right mix of domain experts, customers, analytic talent, and project management to be effective. In addition, evaluate how much time is needed and if the team has the right breadth and depth of skills.

After taking inventory of the tools, technology, data, and people, consider if the team has sufficient resources to succeed on this project, or if additional resources are needed. Negotiating for resources at the outset of the project, while scoping the goals, objectives, and feasibility, is generally more useful than later in the process and ensures sufficient time to execute it properly. Project managers and key stakeholders have better success negotiating for the right resources at this stage rather than later once the project is underway.

2.2.3 Framing the Problem

Framing the problem well is critical to the success of the project. Framing is the process of stating the analytics problem to be solved. At this point, it is a best practice to write down the problem statement and share it with the key stakeholders. Each team member may hear slightly different things related to the needs and the problem and have somewhat different ideas of possible solutions. For these reasons, it is crucial to state the analytics problem, as well as why and to whom it is important. Essentially, the team needs to clearly articulate the current situation and its main challenges. As part of this activity, it is important to identify the main objectives of the project, identify what needs to be achieved in business terms, and identify what needs to be done to meet the needs. Additionally, consider the objectives and the success criteria for the project. What is the team attempting to achieve by doing the project, and what will be considered "good enough" as an outcome of the project?
This is critical to document and share with the project team and key stakeholders. It is a best practice to share the statement of goals and success criteria with the team and confirm alignment with the project sponsor's expectations.

Perhaps equally important is to establish failure criteria. Most people doing projects prefer to think only of the success criteria and what conditions will look like when the participants are successful. However, this amounts to a best-case-scenario approach, assuming that everything will proceed as planned and the project team will reach its goals. No matter how well a project is planned, it is almost impossible to anticipate everything that will emerge during its course. The failure criteria will guide the team in understanding when it is best to stop trying or to settle for the results that have been gleaned from the data. Many times people will continue to perform analyses past the point when any meaningful insights can be drawn from the data. Establishing criteria for both success and failure helps the participants avoid unproductive effort and remain aligned with the project sponsors.

2.2.4 Identifying Key Stakeholders

Another important step is to identify the key stakeholders and their interests in the project. During these discussions, the team can identify the success criteria, key risks, and stakeholders, which should include anyone who will benefit from the project or will be significantly impacted by it. When interviewing stakeholders, learn about the domain area and any relevant history from similar analytics projects. For example, the team may identify the results each stakeholder wants from the project and the criteria each will use to judge its success.

Keep in mind that the analytics project is being initiated for a reason. It is critical to articulate the pain points as clearly as possible in order to address them, and to be aware of areas to pursue or avoid as the team gets further into the analytical process. Depending on the number of stakeholders and participants, the team may consider outlining the type of activity and participation expected from each stakeholder and participant. This will set clear expectations with the participants and avoid delays later when, for example, the team may feel it needs to wait for approval from someone who views himself as an adviser rather than an approver of the work product.

2.2.5 Interviewing the Analytics Sponsor

The team should plan to collaborate with the stakeholders to clarify and frame the analytics problem. At the outset, project sponsors may have a predetermined solution that may not necessarily realize the desired outcome. In these cases, the team must use its knowledge and expertise to identify the true underlying problem and an appropriate solution.

For instance, suppose in the early phase of a project the team is told to create a recommender system for the business, and that the way to do this is by speaking with three people and integrating the product recommender into a legacy corporate system. Although this may be a valid approach, it is important to test the assumptions and develop a clear understanding of the problem. The data science team typically has a more objective view of the problem set than the stakeholders, who may be suggesting solutions to a given problem. Therefore, the team can probe deeper into the context and domain to clearly define the problem and propose possible paths from the problem to a desired outcome. In essence, the data science team can take a more objective approach, as the stakeholders may have developed biases over time based on their experience. Also, what may have been true in the past may no longer be a valid working assumption. One possible way to circumvent this issue is for the project sponsor to focus on clearly defining the requirements, while the other members of the data science team focus on the methods needed to achieve the goals.
When interviewing the main stakeholders, the team needs to take time to thoroughly interview the project sponsor, who tends to be the one funding the project or providing the high-level requirements. This person understands the problem and usually has an idea of a potential working solution. It is critical to thoroughly understand the sponsor's perspective to guide the team in getting started on the project. Here are some tips for interviewing project sponsors:

• Prepare for the interview; draft questions, and review with colleagues.
• Use open-ended questions; avoid asking leading questions.
• Probe for details and pose follow-up questions.
• Avoid filling every silence in the conversation; give the other person time to think.
• Let the sponsors express their ideas and ask clarifying questions, such as "Why? Is that correct? Is this idea on target? Is there anything else?"
• Use active listening techniques; repeat back what was heard to make sure the team heard it correctly, or reframe what was said.
• Try to avoid expressing the team's opinions, which can introduce bias; instead, focus on listening.
• Be mindful of the body language of the interviewers and stakeholders; use eye contact where appropriate, and be attentive.
• Minimize distractions.
• Document what the team heard, and review it with the sponsors.

Following is a brief list of common questions that are helpful to ask during the discovery phase when interviewing the project sponsor. The responses will begin to shape the scope of the project and give the team an idea of its goals and objectives.

• What business problem is the team trying to solve?
• What is the desired outcome of the project?
• What data sources are available?
• What industry issues may impact the analysis?
• What timelines need to be considered?
• Who could provide insight into the project?
• Who has final decision-making authority on the project?
• How will the focus and scope of the problem change if the following dimensions change:
  • Time: Analyzing 1 year or 10 years' worth of data?
  • People: Assess impact of changes in resources on the project timeline.
  • Risk: Conservative to aggressive.
  • Resources: None to unlimited (tools, technology, systems).
  • Size and attributes of data: Including internal and external data sources.


2.2.6 Developing Initial Hypotheses

Developing a set of initial hypotheses (IHs) is a key facet of the discovery phase. This step involves forming ideas that the team can test with data. Generally, it is best to come up with a few primary hypotheses to test and then be creative about developing several more. These IHs form the basis of the analytical tests the team will use in later phases and serve as the foundation for the findings in Phase 5. Hypothesis testing from a statistical perspective is covered in greater detail in Chapter 3, "Review of Basic Data Analytic Methods Using R." In this way, the team can compare its answers with the outcome of an experiment or test to generate additional possible solutions to problems. As a result, the team will have a much richer set of observations to choose from and more choices for agreeing upon the most impactful conclusions from the project.

Another part of this process involves gathering and assessing hypotheses from stakeholders and domain experts, who may have their own perspectives on what the problem is, what the solution should be, and how to arrive at a solution. These stakeholders know the domain area well and can offer suggestions on ideas to test as the team formulates hypotheses during this phase. The team will likely collect many ideas that may illuminate the operating assumptions of the stakeholders. These ideas will also give the team opportunities to expand the project scope into adjacent spaces where it makes sense, or to design experiments in a meaningful way to address the most important interests of the stakeholders. As part of this exercise, it can be useful to obtain and explore some initial data to inform discussions with stakeholders during the hypothesis-forming stage.
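To make the idea concrete, the following is a minimal sketch in R (the language used in Chapter 3) of how an initial hypothesis might later be framed as a statistical test. Everything here is an illustrative assumption rather than a prescribed method: the simulated data, the columns loyalty and order_value, and the choice of a two-sample t-test stand in for whatever hypothesis and data the team actually settles on.

# Illustrative sketch only: testing an IH such as "loyalty-program members
# spend more per order than non-members." Data and names are hypothetical;
# a real test would use the team's own data sources (see Chapter 3).
set.seed(42)
orders <- data.frame(
  loyalty     = rep(c("member", "nonmember"), each = 200),
  order_value = c(rnorm(200, mean = 55, sd = 12),   # simulated members
                  rnorm(200, mean = 50, sd = 12))   # simulated non-members
)

# Two-sample t-test; the null hypothesis is "no difference in mean order value."
result <- t.test(order_value ~ loyalty, data = orders)
print(result)

# A small p-value suggests the data are inconsistent with the null hypothesis,
# lending preliminary support to the initial hypothesis.

Even a toy sketch like this is useful during discovery: writing an IH as a testable statement forces the team to name the variables, groups, and data sources it will need in later phases.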


2.2.7 Identifying Potential Data Sources

As part of the discovery phase, identify the kinds of data the team will need to solve the problem. Consider the volume, type, and time span of the data needed to test the hypotheses. Ensure that the team can access more than simply aggregated data. In most cases, the team will need the raw data to avoid introducing bias into the downstream analysis. Recalling the characteristics of Big Data from Chapter 1, assess the main characteristics of the data with regard to its volume, variety, and velocity of change. A thorough diagnosis of the data situation will influence the kinds of tools and techniques to use in Phases 2-4 of the Data Analytics Lifecycle. In addition, performing data exploration in this phase will help the team determine the amount of data needed, such as the amount of historical data to pull from existing systems, and the data structure. Develop an idea of the scope of the data needed, and validate that idea with the domain experts on the project.

The team should perform five main activities during this step of the discovery phase:

• Identify data sources: Make a list of candidate data sources the team may need to test the initial hypotheses outlined in this phase. Make an inventory of the datasets currently available and those that can be purchased or otherwise acquired for the tests the team wants to perform.
• Capture aggregate data sources: This is for previewing the data and providing high-level understanding. It enables the team to gain a quick overview of the data and perform further exploration on specific areas. It also points the team to possible areas of interest within the data.
• Review the raw data: Obtain preliminary data from initial data feeds. Begin understanding the interdependencies among the data attributes, and become familiar with the content of the data, its quality, and its limitations. (A brief sketch of this kind of first-pass review follows this list.)
• Evaluate the data structures and tools needed: The data type and structure dictate which tools the team can use to analyze the data. This evaluation gets the team thinking about which technologies may be good candidates for the project and how to start getting access to these tools.
• Scope the sort of data infrastructure needed for this type of problem: In addition to the tools needed, the data influences the kind of infrastructure that's required, such as disk storage and network capacity.
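As an illustration of the "review the raw data" activity, here is a minimal sketch in R of a first-pass review of an initial data feed, using only base R functions. The file name initial_feed_extract.csv and the order_date column are hypothetical placeholders for whatever extract the team obtains.

# Illustrative first-pass review of a raw data feed (file and fields are
# hypothetical). The goal is familiarity, not analysis: size, structure,
# quality, and time span of the data.
raw <- read.csv("initial_feed_extract.csv", stringsAsFactors = FALSE)

dim(raw)          # how many records and attributes are available?
str(raw)          # data types and a preview of each attribute
summary(raw)      # ranges, quartiles, and obvious anomalies
head(raw, 10)     # inspect a few raw records directly

# Gauge data quality: how much is missing, per attribute?
colSums(is.na(raw))

# If the feed carries a date attribute (assumed here to be 'order_date'
# in YYYY-MM-DD form), check the time span covered by the extract.
range(as.Date(raw$order_date), na.rm = TRUE)

A review like this quickly surfaces whether the feed covers enough history, which attributes are usable, and where quality problems will have to be addressed in Phase 2.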
Unlike many traditional stage-gate processes, in which the team can advance only when specific criteria are met, the Data Analytics Lifecycle is intended to accommodate more ambiguity. This more closely reflects how data science projects work in real-life situations. For each phase of the process, it is recommended to pass certain checkpoints as a way of gauging whether the team is ready to move to the next phase of the Data Analytics Lifecycle.

The team can move to the next phase when it has enough information to draft an analytics plan and share it for peer review. Although a peer review of the plan may not actually be required by the project, creating the plan is a good test of the team's grasp of the business problem and its approach to addressing it. Creating the analytic plan also requires a clear understanding of the domain area, the problem to be solved, and the scoping of the data sources to be used. Developing success criteria early in the project clarifies the problem definition and helps the team when it comes time to make choices about the analytical methods to be used in later phases.

2.3 Phase 2: Data Preparation

The second phase of the Data Analytics Lifecycle involves data preparation, which includes the steps to explore, preprocess, and condition data prior to modeling and analysis. In this phase, the team needs to create a robust environment in which it can explore the data, separate from a production environment. Usually, this is done by preparing an analytics sandbox. To get the data into the sandbox, the team needs to perform ETLT, a combination of extracting, transforming, and loading data into the sandbox. Once the data is in the sandbox, the team needs to learn about the data and become familiar with it. Understanding the data in detail is critical to the success of the project. The team also must decide how to condition and transform data to get it into a format that facilitates subsequent analysis. The team may perform data visualizations to help team members understand the data, including its trends, outliers, and relationships among data variables. Each of these steps of the data preparation phase is discussed throughout this section.

Data preparation tends to be the most labor-intensive step in the analytics lifecycle. In fact, it is common for teams to spend at least 50% of a data science project's time in this critical phase. If the team cannot obtain enough data of sufficient quality, it may be unable to perform the subsequent steps in the lifecycle process.

Figure 2-4 shows an overview of the Data Analytics Lifecycle for Phase 2. The data preparation phase is generally the most iterative and the one that teams tend to underestimate most often. This is because most teams and leaders are anxious to begin analyzing the data, testing hypotheses, and getting answers to some of the questions posed in Phase 1. Many tend to jump into Phase 3 or Phase 4 to begin rapidly developing models and algorithms without spending the time to prepare the data for modeling. Consequently, teams come to realize the data they are working with does not allow them to execute the models they want, and they end up back in Phase 2 anyway.
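To ground the sandbox idea, here is a minimal sketch in R of one way to move an extract into a lightweight local sandbox and take a first visual look at the data. The use of SQLite through the DBI and RSQLite packages, and all file, table, and column names, are illustrative assumptions; a real analytics sandbox would normally live on shared infrastructure sized for the data volumes involved.

# Illustrative sketch: load an extracted file into a local "sandbox" database,
# then visualize one variable to spot trends and outliers. Requires the DBI
# and RSQLite packages; all names here are hypothetical.
library(DBI)
library(RSQLite)

# Extract: read the raw file produced by the source system.
raw <- read.csv("initial_feed_extract.csv", stringsAsFactors = FALSE)

# Load: copy the data into a sandbox database, separate from production.
sandbox <- dbConnect(SQLite(), "analytics_sandbox.sqlite")
dbWriteTable(sandbox, "orders_raw", raw, overwrite = TRUE)

# Explore: pull the working copy back and inspect one numeric attribute
# (assumed here to be 'order_value') for distribution shape and outliers.
orders <- dbReadTable(sandbox, "orders_raw")
hist(orders$order_value, main = "Order value distribution",
     xlab = "Order value")
boxplot(orders$order_value, main = "Order value outliers")

dbDisconnect(sandbox)

The design point is the separation: the team explores and conditions a working copy inside the sandbox, while the production systems that supplied the extract remain untouched.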

