PREDICTIVE ANALYTICS PDF


Predictive analytics encompasses a variety of statistical techniques from predictive modeling, machine learning, and data mining that analyze current and historical data to make predictions about future or otherwise unknown events.




Why the NSA wants all your data: machine learning supercomputers to fight terrorism. How companies ascertain untold, private truths — how Target figures out you're pregnant and Hewlett-Packard deduces you're about to quit your job. How judges and parole boards rely on crime-predicting computers to decide how long convicts remain in prison.

How does predictive analytics work? This jam-packed book satisfies by demystifying the intriguing science under the hood. For future hands-on practitioners pursuing a career in the field, it sets a strong foundation, delivers the prerequisite knowledge, and whets your appetite for more.

A truly omnipresent science, predictive analytics constantly affects our daily lives. The book's author, a former Columbia University professor, is a renowned speaker, educator, and leader in the field.

Apart from identifying prospects, predictive analytics can also help to identify the most effective combination of product versions, marketing material, communication channels and timing that should be used to target a given consumer.

The goal of predictive analytics here is typically to lower the cost per order or cost per action.

Fraud detection

Fraud is a big problem for many businesses and can take many forms. These problems plague firms of all sizes in many industries.

Some examples of likely victims are credit card issuers, insurance companies,[24] retail merchants, manufacturers, business-to-business suppliers and even service providers. A predictive model can help weed out the "bads" and reduce a business's exposure to fraud.

Predictive modeling can also be used to identify high-risk fraud candidates in business or the public sector. Mark Nigrini developed a risk-scoring method to identify audit targets. He describes the use of this approach to detect fraud in the franchisee sales reports of an international fast-food chain. Each location is scored using 10 predictors. The 10 scores are then weighted to give one final overall risk score for each location.
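As a rough illustration of this kind of weighted scoring (the predictor scores and weights below are invented for the example, not taken from Nigrini's method), each location's predictor scores can be combined into a single risk score:

```python
import numpy as np

# Hypothetical example: 5 locations scored on 10 fraud-risk predictors,
# each score scaled to 0-1 (higher = more suspicious).
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, size=(5, 10))   # rows = locations, cols = predictors

# Hypothetical weights reflecting how strongly each predictor signals fraud,
# normalized so the final risk score also falls between 0 and 1.
weights = np.array([3, 2, 2, 1, 1, 1, 1, 1, 1, 1], dtype=float)
weights /= weights.sum()

risk_score = scores @ weights                   # one overall score per location
ranking = np.argsort(risk_score)[::-1]          # audit the highest-scoring locations first
print(risk_score.round(3), ranking)
```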

The same scoring approach was also used to identify high-risk check kiting accounts, potentially fraudulent travel agents, and questionable vendors. A reasonably complex model was used to identify fraudulent monthly reports submitted by divisional controllers.

In web fraud detection, this type of solution uses heuristics to study normal web user behavior and detect anomalies indicating fraud attempts.

Portfolio, product or economy-level prediction

Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, the Federal Reserve Board might be interested in predicting the unemployment rate for the next year.

These types of problems can be addressed by predictive analytics using time series techniques (see below). They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power.

Project risk management

When employing risk management techniques, the aim is to predict and benefit from a future scenario.

The capital asset pricing model (CAPM) "predicts" the best portfolio to maximize return.


Probabilistic risk assessment (PRA), when combined with mini-Delphi techniques and statistical approaches, yields accurate forecasts. These are examples of approaches that can extend from project to market, and from near to long term.

Underwriting (see below) and other business approaches identify risk management as a predictive method.

Underwriting

Many businesses have to account for risk exposure due to their different services and determine the cost needed to cover the risk. For example, auto insurance providers need to accurately determine the amount of premium to charge to cover each automobile and driver. A financial company needs to assess a borrower's potential and ability to pay before granting a loan.

For a health insurance provider, predictive analytics can analyze a few years of past medical claims data, as well as lab, pharmacy and other records where available, to predict how expensive an enrollee is likely to be in the future. Predictive analytics can help underwrite these quantities by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application level data.

Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default.

Technology and big data influences

Big data is a collection of data sets that are so large and complex that they become awkward to work with using traditional database management tools. The volume, variety and velocity of big data have introduced challenges across the board for capture, storage, search, sharing, analysis, and visualization.

Examples of big data sources include web logs, RFID, sensor data, social networks, Internet search indexing, call detail records, military surveillance, and complex data in the astronomical, biogeochemical, genomic, and atmospheric sciences.

Big data is the core of most predictive analytics services offered by IT organizations.

Regression techniques

Regression models are the mainstay of predictive analytics. The focus lies on establishing a mathematical equation as a model to represent the interactions between the different variables under consideration. Depending on the situation, there is a wide variety of models that can be applied while performing predictive analytics.

Some of them are briefly discussed below.

Linear regression model

The linear regression model analyzes the relationship between the response or dependent variable and a set of independent or predictor variables.

This relationship is expressed as an equation that predicts the response variable as a linear function of the parameters.
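In the usual notation (a standard formulation, added here for reference rather than taken from the text above), the model for observation i with p predictors is:

```latex
y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \varepsilon_i ,
\qquad \varepsilon_i \ \text{i.i.d. with mean } 0 \text{ and variance } \sigma^2 .
```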

These parameters are adjusted so that a measure of fit is optimized. Much of the effort in model fitting is focused on minimizing the size of the residual, as well as ensuring that it is randomly distributed with respect to the model predictions. The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals.

This is referred to as ordinary least squares (OLS) estimation and results in best linear unbiased estimates (BLUE) of the parameters if and only if the Gauss-Markov assumptions are satisfied. Once the model has been estimated we would be interested to know whether the predictor variables belong in the model, i.e., whether the estimate of each variable's contribution is reliable. To do this we can check the statistical significance of the model's coefficients, which can be measured using the t-statistic.
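A minimal sketch of OLS estimation, assuming the statsmodels package and synthetic data (the variable names are illustrative only); the fitted result exposes the coefficient estimates together with their t-statistics and p-values:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))                 # two synthetic predictors
y = 1.5 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200)

X_design = sm.add_constant(X)                  # adds the intercept column
ols_fit = sm.OLS(y, X_design).fit()            # ordinary least squares

print(ols_fit.params)                          # estimated coefficients
print(ols_fit.tvalues)                         # t-statistics for each coefficient
print(ols_fit.pvalues)                         # significance of each coefficient
print(ols_fit.rsquared)                        # goodness of fit (R-squared)
```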

This amounts to testing whether the coefficient is significantly different from zero. A goodness-of-fit statistic such as R² measures the predictive power of the model, i.e., the proportion of the variation in the dependent variable that is explained by the model.

Discrete choice models

Multivariate regression (above) is generally used when the response variable is continuous and has an unbounded range.

Often the response variable may not be continuous but rather discrete. If the dependent variable is discrete, better-suited methods include logistic regression, multinomial logit and probit models.

Logistic regression and probit models are used when the dependent variable is binary.

Logistic regression

In a classification setting, assigning outcome probabilities to observations can be achieved through the use of a logistic model, which transforms information about the binary dependent variable into an unbounded continuous variable and estimates a regular multivariate model (see Allison's Logistic Regression for more information on the theory of logistic regression).
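A small sketch of a logistic model on synthetic data, assuming statsmodels (the variable names are invented for the example); the fitted object gives coefficients on the log-odds scale and predicted probabilities:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
# Generate a binary outcome whose log-odds are linear in the predictors.
logits = 0.5 + 1.2 * X[:, 0] - 0.8 * X[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

logit_fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(logit_fit.params)                          # coefficients (log-odds scale)
print(np.exp(logit_fit.params))                  # odds ratios
probs = logit_fit.predict(sm.add_constant(X))    # predicted outcome probabilities
```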

The Wald and likelihood-ratio tests are used to test the statistical significance of each coefficient b in the model (analogous to the t-tests used in OLS regression; see above). A test assessing the goodness-of-fit of a classification model is the "percentage correctly predicted".

Multinomial logistic regression

An extension of the binary logit model to cases where the dependent variable has more than two categories is the multinomial logit model.

In such cases collapsing the data into two categories might not make good sense or may lead to a loss in the richness of the data. The multinomial logit model is the appropriate technique in these cases, especially when the dependent variable categories are not ordered (for example, colors like red, blue, green).

Probit regression

Probit models offer an alternative to logistic regression for modeling categorical dependent variables. Even though the outcomes tend to be similar, the underlying distributions are different.

Probit models are popular in social sciences like economics. A good way to understand the key difference between probit and logit models is to assume that the dependent variable is driven by a latent variable z, which is a sum of a linear combination of explanatory variables and a random noise term. In the logit model we assume that the random noise term follows a logistic distribution with mean zero.

In the probit model we assume that it follows a normal distribution with mean zero.

Logit versus probit

The probit model has been around longer than the logit model. They behave similarly, except that the logistic distribution has slightly heavier tails. One of the reasons the logit model was formulated was that the probit model was computationally difficult due to the requirement of numerically calculating integrals.

Modern computing, however, has made this computation fairly simple. The coefficients obtained from the logit and probit models are fairly close; however, the odds ratio is easier to interpret in the logit model. Practical reasons for choosing the probit model over the logistic model include a strong belief that the underlying distribution is normal, or that the event of interest is not strictly a binary outcome.
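To make the comparison concrete, here is a short sketch (synthetic data, statsmodels assumed) fitting both models on the same binary outcome; the coefficients differ in scale but the fitted probabilities are nearly identical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
# Latent-variable formulation: y = 1 when the latent index exceeds zero.
y = (0.4 + 0.9 * x + rng.logistic(size=1000) > 0).astype(int)

X = sm.add_constant(x)
logit_fit = sm.Logit(y, X).fit(disp=0)
probit_fit = sm.Probit(y, X).fit(disp=0)

print(logit_fit.params, probit_fit.params)   # logit coefficients are larger in magnitude
print(np.abs(logit_fit.predict(X) - probit_fit.predict(X)).max())  # probabilities nearly agree
```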

Time series models

Time series models are used for predicting or forecasting the future behavior of variables. These models account for the fact that data points taken over time may have an internal structure (such as autocorrelation, trend or seasonal variation) that should be accounted for.

As a result, standard regression techniques cannot be applied to time series data, and methodology has been developed to decompose the trend, seasonal and cyclical components of the series.


Modeling the dynamic path of a variable can improve forecasts, since the predictable component of the series can be projected into the future. Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive (AR) and moving-average (MA) models. The Box-Jenkins methodology combines the AR and MA models to produce the ARMA (autoregressive moving average) model, which is the cornerstone of stationary time series analysis.

ARIMA (autoregressive integrated moving average) models, on the other hand, are used to describe non-stationary time series.

Box and Jenkins suggest differencing a non-stationary time series to obtain a stationary series to which an ARMA model can be applied. Non-stationary time series have a pronounced trend and do not have a constant long-run mean or variance.

Box and Jenkins proposed a three-stage methodology comprising model identification, estimation and validation. The identification stage involves determining whether the series is stationary and whether there is significant seasonality, by examining plots of the series and its autocorrelation and partial autocorrelation functions. The estimation stage involves estimating the parameters of the chosen model. Finally, the validation stage involves diagnostic checking, such as plotting the residuals to detect outliers and to assess evidence of model fit.
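A minimal sketch of this Box-Jenkins workflow on a synthetic series, assuming the statsmodels package (the ARIMA order chosen here is purely illustrative):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
# Synthetic non-stationary series: random walk with drift plus noise.
y = np.cumsum(0.1 + rng.normal(size=300))

# Identification: in practice, inspect plots and the ACF/PACF of the differenced series.
# Estimation: order=(1, 1, 1) differences once, so an ARMA(1, 1) is fit to the differences.
arima_fit = ARIMA(y, order=(1, 1, 1)).fit()

# Validation: diagnostic checks on the residuals, then forecast.
print(arima_fit.summary())
residuals = arima_fit.resid                   # examine for outliers and remaining structure
forecast = arima_fit.forecast(steps=12)       # project the series 12 steps ahead
```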

In recent years, time series models have become more sophisticated and attempt to model conditional heteroskedasticity, with models such as ARCH (autoregressive conditional heteroskedasticity) and GARCH (generalized autoregressive conditional heteroskedasticity) frequently used for financial time series.
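For conditional heteroskedasticity, one common choice in Python is the third-party arch package (assumed here; the returns are synthetic and the GARCH(1,1) specification is only an example):

```python
import numpy as np
from arch import arch_model   # third-party package: pip install arch

rng = np.random.default_rng(3)
returns = 100 * rng.normal(scale=0.01, size=1000)   # synthetic daily returns, in percent

# GARCH(1,1) volatility process with a constant mean.
garch_fit = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1).fit(disp="off")
print(garch_fit.summary())
print(garch_fit.conditional_volatility[-5:])         # fitted conditional volatility
```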

In addition, time series models are also used to understand inter-relationships among economic variables represented by systems of equations, using VAR (vector autoregression) and structural VAR models.
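A short sketch of a two-variable VAR, again with statsmodels and synthetic data (the variable names are made up for the example):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)
n = 300
gdp_growth = rng.normal(scale=0.5, size=n).cumsum() * 0.01
unemployment = 5 - 0.3 * gdp_growth + rng.normal(scale=0.2, size=n)

data = pd.DataFrame({"gdp_growth": gdp_growth, "unemployment": unemployment})
var_fit = VAR(data).fit(2)                               # VAR with 2 lags of each variable
print(var_fit.summary())
forecast = var_fit.forecast(data.values[-2:], steps=8)   # forecast 8 periods ahead
```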

Survival or duration analysis

Survival analysis is another name for time-to-event analysis. These techniques were primarily developed in the medical and biological sciences, but they are also widely used in the social sciences such as economics, as well as in engineering (reliability and failure time analysis).

Censoring and non-normality, which are characteristic of survival data, generate difficulty when trying to analyze the data using conventional statistical models such as multiple linear regression.

Hence the normality assumption of regression models is violated. In survival analysis, censored observations arise whenever the dependent variable of interest represents the time to a terminal event and the duration of the study is limited in time. The assumption is that if the data were not censored, they would be representative of the population of interest.

An important concept in survival analysis is the hazard rate, defined as the probability that the event will occur at time t conditional on surviving until time t.
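In symbols (standard definitions, added here for reference; T denotes the time to the event, f its density, and S the survival function discussed next):

```latex
h(t) \;=\; \lim_{\Delta t \to 0} \frac{\Pr(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
\;=\; \frac{f(t)}{S(t)},
\qquad S(t) \;=\; \Pr(T > t).
```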

Another concept related to the hazard rate is the survival function which can be defined as the probability of surviving to time t. Most models try to model the hazard rate by choosing the underlying distribution depending on the shape of the hazard function. A distribution whose hazard function slopes upward is said to have positive duration dependence, a decreasing hazard shows negative duration dependence whereas constant hazard is a process with no memory usually characterized by the exponential distribution.

Some of the distributional choices in survival models are: F, gamma, Weibull, log normal, inverse normal, exponential etc. All these distributions are for a non-negative random variable.

Duration models can be parametric, non-parametric or semi-parametric. Some commonly used models are the Kaplan-Meier estimator (non-parametric) and the Cox proportional hazards model (semi-parametric).
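A hedged sketch using the third-party lifelines package (assumed here) on synthetic, right-censored durations; it fits a Kaplan-Meier curve and a Cox proportional hazards model:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter   # third-party: pip install lifelines

rng = np.random.default_rng(11)
n = 300
age = rng.normal(50, 10, size=n)
true_time = rng.exponential(scale=np.exp(4 - 0.03 * age))   # hazard increases with age
censor_time = rng.exponential(scale=60, size=n)
duration = np.minimum(true_time, censor_time)
observed = (true_time <= censor_time).astype(int)           # 1 = event observed, 0 = censored

km = KaplanMeierFitter().fit(duration, event_observed=observed)
print(km.median_survival_time_)

df = pd.DataFrame({"duration": duration, "event": observed, "age": age})
cox = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cox.print_summary()
```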



Decision tree learning

Globally-optimal classification tree analysis (GO-CTA), also called hierarchical optimal discriminant analysis (HODA), is a generalization of optimal discriminant analysis that may be used to identify the statistical model that has maximum accuracy for predicting the value of a categorical dependent variable for a dataset consisting of categorical and continuous variables. The output of HODA is a non-orthogonal tree that combines categorical variables and cut points for continuous variables that yields maximum predictive accuracy, an assessment of the exact Type I error rate, and an evaluation of potential cross-generalizability of the statistical model.

Hierarchical optimal discriminant analysis may be thought of as a generalization of Fisher's linear discriminant analysis. Optimal discriminant analysis is an alternative to ANOVA (analysis of variance) and regression analysis, which attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA and regression analysis give a dependent variable that is a numerical variable, while hierarchical optimal discriminant analysis gives a dependent variable that is a class variable.

Classification and regression trees (CART) are a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively.

Decision trees are formed by a collection of rules based on variables in the modeling data set: rules based on variables' values are selected to get the best split to differentiate observations based on the dependent variable; once a rule is selected and splits a node into two, the same process is applied to each "child" node (i.e., it is a recursive procedure).

Alternatively, the data are split as much as possible and then the tree is later pruned. Each branch of the tree ends in a terminal node. Each observation falls into one and exactly one terminal node, and each terminal node is uniquely defined by a set of rules.
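A minimal classification tree sketch with scikit-learn (assumed; the data and feature names are invented), showing the recursive splitting and pruning controls described above:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))                        # three synthetic predictors
y = ((X[:, 0] > 0.2) & (X[:, 1] < 0.5)).astype(int)   # rule-like ground truth

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth limits growth; ccp_alpha prunes the tree after splitting.
tree = DecisionTreeClassifier(max_depth=4, ccp_alpha=0.001, random_state=0)
tree.fit(X_train, y_train)

print(export_text(tree, feature_names=["x0", "x1", "x2"]))   # the rule at each split
print("test accuracy:", tree.score(X_test, y_test))
```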


A very popular method for predictive analytics is Leo Breiman's random forests.

Multivariate adaptive regression splines

Multivariate adaptive regression splines (MARS) is a non-parametric technique that builds flexible models by fitting piecewise linear regressions.

An important concept associated with regression splines is that of a knot. A knot is where one local regression model gives way to another and thus is the point of intersection between two splines. In multivariate adaptive regression splines, basis functions are the tool used for generalizing the search for knots.

Basis functions are a set of functions used to represent the information contained in one or more variables. The MARS model almost always creates the basis functions in pairs. The approach deliberately overfits the model and then prunes it back to arrive at the optimal model.
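To make the knot and basis-function idea concrete, here is a tiny hand-rolled sketch (not a full MARS implementation): a pair of mirrored hinge functions at an assumed knot, combined by ordinary least squares. A real MARS algorithm would search over knot locations and prune the basis pairs automatically.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, size=200))
y = np.where(x < 4, 1.0 * x, 4 + 0.2 * (x - 4)) + rng.normal(scale=0.3, size=200)

knot = 4.0                                    # assumed knot location for illustration
h_right = np.maximum(0.0, x - knot)           # hinge basis function max(0, x - knot)
h_left = np.maximum(0.0, knot - x)            # its mirrored pair max(0, knot - x)

# Fit a linear model in the basis functions (intercept plus the hinge pair).
B = np.column_stack([np.ones_like(x), h_right, h_left])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
y_hat = B @ coef
print("coefficients:", coef.round(3))
```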

The algorithm is computationally very intensive, and in practice we are required to specify an upper limit on the number of basis functions.

Machine learning techniques

Machine learning, a branch of artificial intelligence, was originally employed to develop techniques to enable computers to learn.

Today, since it includes a number of advanced statistical methods for regression and classification, it finds application in a wide variety of fields including medical diagnostics, credit card fraud detection, face and speech recognition and analysis of the stock market. In certain applications it is sufficient to directly predict the dependent variable without focusing on the underlying relationships between variables.

In other cases, the underlying relationships can be very complex and the mathematical form of the dependencies unknown. For such cases, machine learning techniques emulate human cognition and learn from training examples to predict future events. A brief discussion of some of these methods used commonly for predictive analytics is provided below.

A detailed study of machine learning can be found in Mitchell.

Neural networks

Neural networks are sophisticated nonlinear modeling techniques that are able to model complex functions. Neural networks are used when the exact nature of the relationship between inputs and output is not known.

A key feature of neural networks is that they learn the relationship between inputs and output through training. There are three types of training used by different networks: supervised training, unsupervised training, and reinforcement learning, with supervised being the most common.

Some examples of neural network training techniques are backpropagation, quick propagation, conjugate gradient descent, projection operator, Delta-Bar-Delta, etc. Some network architectures are multilayer perceptrons, Kohonen self-organizing networks, Hopfield networks, etc.

Multilayer perceptron (MLP)

The multilayer perceptron (MLP) consists of an input and an output layer with one or more hidden layers of nonlinearly-activating nodes or sigmoid nodes.

The network's output is determined by its weight vector, and it is necessary to adjust the weights of the network during training. Backpropagation employs gradient descent to minimize the squared error between the network's output values and the desired values for those outputs. The weights are adjusted through an iterative process in which the training examples are repeatedly presented to the network; making small changes to the weights so that the outputs approach the desired values is what is meant by training the net, and it is governed by the learning rule applied to the training set.
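A brief sketch of a multilayer perceptron trained with gradients computed by backpropagation, assuming scikit-learn (the hyperparameters here are arbitrary, not a recommendation):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 4))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 - X[:, 2] > 0.5).astype(int)   # nonlinear target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 32 logistic (sigmoid) units; the weights are adjusted iteratively
# using gradients obtained by backpropagation.
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation="logistic",
                    solver="adam", max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```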

Radial basis functions

A radial basis function (RBF) is a function which has built into it a distance criterion with respect to a center. Such functions can be used very efficiently for interpolation and for smoothing of data. Radial basis functions have been applied in the area of neural networks where they are used as a replacement for the sigmoidal transfer function.

Such networks have three layers: the input layer, the hidden layer with the RBF non-linearity, and a linear output layer. The most popular choice for the non-linearity is the Gaussian.
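A compact sketch of that three-layer structure, hand-rolled with numpy (the centres and width are chosen naively for illustration): a Gaussian hidden layer followed by a linear output layer fit by least squares:

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

centers = np.linspace(0, 10, 12)               # hidden-layer RBF centres (naive grid)
width = 1.0                                    # shared Gaussian width

# Hidden layer: Gaussian activations, one column per centre, plus a bias unit.
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
Phi = np.column_stack([np.ones_like(x), Phi])

# Linear output layer: solve for the output weights by least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
print("training RMSE:", round(float(np.sqrt(np.mean((y - y_hat) ** 2))), 4))
```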

RBF networks have the advantage of not being locked into local minima as do the feed-forward networks such as the multilayer perceptron.


Support vector machines

Support vector machines (SVMs) are used to detect and exploit complex patterns in data by clustering, classifying and ranking the data. They are learning machines that are used to perform binary classifications and regression estimations. They commonly use kernel-based methods to apply linear classification techniques to non-linear classification problems. There are a number of types of SVM, such as linear, polynomial, sigmoid, etc. SVMs are best employed when faced with the 'curse of dimensionality', i.e., when the data are high-dimensional relative to the number of observations.
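A short scikit-learn sketch (assumed; the data are synthetic) comparing a linear kernel with non-linear kernels on the same classification problem:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
X = rng.normal(size=(600, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)   # circular boundary: not linearly separable

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf", "poly"):
    svm = SVC(kernel=kernel, C=1.0)
    svm.fit(X_train, y_train)
    print(kernel, "test accuracy:", round(svm.score(X_test, y_test), 3))
```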

The k-nearest neighbour (kNN) method does not impose a priori any assumptions about the distribution from which the modeling sample is drawn. It involves a training set with both positive and negative values.

A new sample is classified by calculating the distance to the nearest training case; the sign of that point then determines the classification of the sample. In the k-nearest neighbour classifier, the k nearest points are considered and the sign of the majority is used to classify the sample.

The performance of the kNN algorithm is influenced by three main factors: the distance measure used to locate the nearest neighbours, the decision rule used to derive a classification from them, and the number of neighbours k. It can be proved that, unlike other methods, this method is universally asymptotically convergent: as the size of the training set increases, its error rate approaches the optimal error rate regardless of the distribution from which the sample is drawn (see Devroye et al.).
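A minimal k-nearest neighbours sketch with scikit-learn (assumed; the data are synthetic), varying k to show how the neighbourhood size affects accuracy:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
X = rng.normal(size=(800, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=800) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (1, 5, 15):
    knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")  # distance measure and k
    knn.fit(X_train, y_train)
    print(f"k={k}: test accuracy = {knn.score(X_test, y_test):.3f}")
```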

Geospatial predictive modeling

Conceptually, geospatial predictive modeling is rooted in the principle that the occurrences of events being modeled are limited in distribution. Occurrences of events are neither uniform nor random in distribution; there are spatial environment factors (infrastructure, sociocultural, topographic, etc.) that constrain and influence where events occur.

Geospatial predictive modeling attempts to describe those constraints and influences by spatially correlating occurrences of historical geospatial locations with environmental factors that represent those constraints and influences.

Geospatial predictive modeling is a process for analyzing events through a geographic filter in order to make statements of likelihood for event occurrence or emergence.

Tools

Historically, using predictive analytics tools, as well as understanding the results they delivered, required advanced skills. However, modern predictive analytics tools are no longer restricted to IT specialists.

As more organizations adopt predictive analytics into decision-making processes and integrate it into their operations, they are creating a shift in the market toward business users as the primary consumers of the information. Business users want tools they can use on their own. Available tools range from those that require very little user sophistication to those designed for the expert practitioner. The difference between these tools is often in the level of customization and heavy data lifting allowed.

Several notable open source predictive analytics tools are available. The Predictive Model Markup Language (PMML), an XML-based language, provides a way for different tools to define predictive models and to share them between PMML-compliant applications.

Criticism

There are plenty of skeptics when it comes to computers' and algorithms' abilities to predict the future, including Gary King, a professor at Harvard University and director of the Institute for Quantitative Social Science.

Trying to understand what people will do next assumes that all the influential variables can be known and measured accurately. Everything from the weather to their relationship with their mother can change the way people think and act. All of those variables are unpredictable.

How they will impact a person is even less predictable. If put in the exact same situation tomorrow, they may make a completely different decision. This means that a statistical prediction is only valid in sterile laboratory conditions, which suddenly isn't as useful as it seemed before.




