4.53 out of 5 (323 reviews on Udemy)

Machine Learning using Python

Linear & Logistic Regression, Decision Trees, XGBoost, SVM & other ML models in Python
Instructor: Start-Tech Academy
16,836 students enrolled
English [Auto]
Learn how to solve real-life problems using machine learning techniques
Advanced machine learning models such as decision trees, random forests, SVM, etc.
How to do basic statistical operations and run ML models in Python
Understanding of the basics of statistics and the concepts of machine learning
How to convert a business problem into a machine learning problem
In-depth knowledge of data collection and data preprocessing for machine learning problems

You’re looking for a complete Machine Learning course in Python that can help you launch a flourishing career in the field of Data Science and Machine Learning, right?

You’ve found the right Machine Learning course!

After completing this course, you will be able to:

· Confidently build predictive Machine Learning models using Python to solve business problems and create business strategy

· Answer Machine Learning related interview questions

· Participate and perform in online Data Analytics competitions such as Kaggle competitions

Check out the table of contents below to see which machine learning models you are going to learn.

How will this course help you?

A Verifiable Certificate of Completion is presented to all students who undertake this Machine learning basics course.

If you are a business manager, an executive, or a student who wants to learn and apply machine learning, Python, and predictive modelling to real-world business problems, this course will give you a solid base by teaching you the most popular techniques of machine learning, Python, and predictive modelling.

Why should you choose this course?

This course covers all the steps that one should take while solving a business problem with machine learning. It will give you an in-depth understanding of machine learning and predictive modelling techniques using Python.

Most courses focus only on teaching how to run the analysis, but we believe that what happens before and after running the analysis is even more important. Before running the analysis, it is very important that you have the right data and do some pre-processing on it. And after running the analysis, you should be able to judge how good your model is and interpret the results so that you can actually help your business.

What makes us qualified to teach you?

The course is taught by Abhishek and Pukhraj. As managers at a global analytics consulting firm, we have helped businesses solve business problems using machine learning techniques in R and Python, and we have used that experience to include the practical aspects of data analysis in this course.

We are also the creators of some of the most popular online courses – with over 1 million enrollments and thousands of 5-star reviews like these:

This is very good, i love the fact the all explanation given can be understood by a layman – Joshua

Thank you Author for this wonderful course. You are the best and this course is worth any price. – Daisy

Our Promise

Teaching our students is our job and we are committed to it. If you have any questions about the course content, machine learning, Python, predictive modelling, the practice sheets, or anything related to any topic, you can always post a question in the course or send us a direct message.

Download Practice files, take Quizzes, and complete Assignments

With each lecture, there are class notes attached for you to follow along. You can also take quizzes to check your understanding of concepts of machine learning, Python and predictive modelling. Each section contains a practice assignment for you to practically implement your learning on machine learning, Python and predictive modelling.

Below are answers to some popular FAQs from students who want to start their machine learning journey:

What is Machine Learning?

Machine Learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention.

What are the steps I should follow to be able to build a Machine Learning model?

You can divide your learning process into four parts:

Statistics and Probability – Implementing machine learning techniques requires basic knowledge of statistics and probability concepts. The second section of the course covers this part.

Understanding of Machine Learning – The fourth section helps you understand the terms and concepts associated with machine learning and gives you the steps to follow to build a machine learning model.

Programming Experience – A significant part of machine learning is programming. Python and R clearly stand out as the leaders these days. The third section will help you set up the Python environment and teach you some basic operations. In later sections, there is a video on how to implement each concept taught in the theory lectures in Python.

Understanding of Models – The fifth and sixth sections cover classification models, and each theory lecture comes with a corresponding practical lecture where we actually run each query with you.

Introduction

1
Welcome to the course

Join the Machine Learning using Python course, a comprehensive program designed for beginners and aspiring data scientists. No coding background or advanced math skills are required. Learn Python setup, basic statistics, and popular machine learning models like linear regression, logistic regression, decision trees, support vector machines, and time series forecasting. With just 2 hours a day for one month, you'll master predictive modeling and interview questions. Get personalized doubt-solving support from the instructors. Start your data science journey today!

2
Course Resources

Setting up Python and Jupyter notebook

1
Installing Python and Anaconda

Join this section to learn how to install Python and get a crash course specifically tailored for data science. Whether you're new to Python or have prior experience, follow the step-by-step instructions to install Python from www.anaconda.com. Learn how to navigate the installation process and set up relevant libraries. Get ready to explore Microsoft VS Code as your coding environment. Start your data science journey with the right tools and foundational knowledge.

2
This is a Milestone!

Congratulations on embarking on this course and being part of the top 20 percent of enrolled students. Stay motivated and complete the course with the same energy. Your dedication inspires us to continually enhance the course and address your queries. Rate the course and provide feedback on teaching quality to help us improve.

3
Opening Jupyter Notebook

Dive into the world of Jupyter Notebook with this tutorial. Learn three different methods to open Jupyter Notebook: through Anaconda Navigator, Anaconda Prompt, and Command Prompt. Follow step-by-step instructions to open Jupyter Notebook in your default browser and explore the home screen. Discover how to navigate through files and folders, change directories, and save files. Gain insights into the advantages of Jupyter Notebook over other tools and prepare for the upcoming lessons on Python syntax and Jupyter Notebook features.

4
Introduction to Jupyter

Learn the basics of Jupyter Notebook in this tutorial. Discover how to create new cells and execute code in Python. Understand the concept of cells and their role in organizing code, comments, and outputs. Explore the different cell formats, including code, markdown, and raw. Learn essential shortcuts for editing cells, changing formats, and navigating between editable and non-editable modes. Gain a solid foundation in using Jupyter Notebook and prepare for the upcoming lessons on Python operations.

5
Arithmetic operators in Python: Python Basics

In this tutorial, we will explore some of the basic arithmetic operations in Python using Jupyter Notebook. Learn how to perform addition, subtraction, multiplication, division, exponentiation, and modulus calculations. Observe the importance of using parentheses when dealing with multiple operators and how Python follows the BODMAS rule. Understand how to assign values to variables and perform arithmetic operations while defining variables. Additionally, discover Python's comparison operators, such as greater than, less than, equal to, greater than or equal to, and less than or equal to, and see how they return Boolean values based on the comparison results.
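
As a quick preview of this lecture, here is a minimal sketch of these operators in action (the values are made up for illustration):

```python
# Arithmetic operators follow the BODMAS order of operations
print(5 + 2 * 3)        # 11, multiplication happens before addition
print((5 + 2) * 3)      # 21, parentheses change the order
print(2 ** 3)           # 8, exponentiation
print(7 % 3)            # 1, modulus (remainder)
print(7 / 2)            # 3.5, division returns a float

# Assigning values to variables while doing arithmetic
a = 10
b = 4

# Comparison operators return Boolean values
print(a > b)            # True
print(a <= b)           # False
print(a == b + 6)       # True
```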

6
Quick coding exercise on arithmetic operators
7
Strings in Python: Python Basics

In this tutorial, we delve into working with strings in Python. Learn how to define and assign strings using single or double quotes, and understand the importance of quotation marks. Explore string variables and how to display their values using direct execution or the print function. Discover string formatting using the format method and how to replace specific words within a string. Dive into string indexing and slicing, understanding how to extract individual characters or a range of characters from a string using index positions. Learn about the concept of index and position, as well as negative indexing. Explore the use of step parameters for skipping characters while slicing strings. Understand Python's dynamic typing, where variable types are inferred based on their assigned values, and learn how to determine the type of a variable using the type function.
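
Here is a small illustrative sketch of the string operations described above (the example string is arbitrary):

```python
# Strings can be defined with single or double quotes
city = "New Delhi"

# Display values directly or with the print function; format replaces placeholders
print("Welcome to {}".format(city))   # Welcome to New Delhi

# Indexing starts at 0; negative indexing counts from the end
print(city[0])      # 'N'
print(city[-1])     # 'i'
print(city[4:9])    # 'Delhi', slicing extracts a range of characters
print(city[::2])    # 'NwDli', the step parameter skips characters

# Python infers types dynamically; check with type()
print(type(city))   # <class 'str'>
```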

8
Quick coding exercise on String operations
9
Lists, Tuples and Dictionaries: Python Basics

This video provides an overview of lists, tuples, and dictionaries in Python. It explains the similarities and differences between them, how to define and manipulate them, and their use cases. The video also touches on conditional statements and loops. Gain insights into the fundamentals of these data structures and enhance your Python skills.
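
A minimal sketch of these three data structures, for reference (the values are illustrative):

```python
# Lists are mutable, ordered collections
prices = [250, 310, 480]
prices.append(520)            # lists can be modified in place

# Tuples are immutable, ordered collections
point = (3, 5)                # point[0] = 4 would raise a TypeError

# Dictionaries store key-value pairs
house = {"rooms": 4, "price": 310}
house["airport"] = "YES"      # add or update an entry by key

# Conditional statements and loops work naturally with these types
for p in prices:
    if p > 300:
        print(p, "is above 300")
```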

10
Quick coding exercise on Tuples
11
Working with Numpy Library of Python

Dive into the world of NumPy, a fundamental library in Python for numerical computations. Discover how to create and manipulate NumPy arrays, from one-dimensional to multi-dimensional arrays. Learn about initializing arrays, generating sequences, and even creating random matrices. Understand the importance of maintaining uniform data types in NumPy arrays and explore slicing operations for efficient data extraction. Get ready to unlock the full potential of NumPy for high-performance vector and matrix operations in Python.
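
Here is a short, illustrative taste of the NumPy operations covered in this lecture:

```python
import numpy as np

# One- and two-dimensional arrays
a = np.array([1, 2, 3, 4])
m = np.array([[1, 2], [3, 4]])

# Initializing arrays, generating sequences, and random matrices
zeros = np.zeros((2, 3))          # 2x3 array of zeros
seq = np.arange(0, 10, 2)         # array([0, 2, 4, 6, 8])
rand = np.random.rand(3, 3)       # 3x3 matrix of random values in [0, 1)

# NumPy enforces a uniform data type per array
mixed = np.array([1, 2.5])        # both elements become floats

# Slicing extends naturally to multiple dimensions
print(m[0, :])                    # first row
print(a[1:3])                     # array([2, 3])
```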

12
Quick coding exercise on NumPy Library
13
Working with Pandas Library of Python

Discover the power of Pandas, a versatile software library for data manipulation and analysis in Python. Learn how to import data from CSV files and explore its structure using Pandas functions. Manipulate and transform data using indexing techniques and understand the significance of headers and indexes. Dive into descriptive statistics to gain insights into your data, and uncover the power of loc and iloc for efficient data extraction. With Pandas as your tool, unlock the potential for advanced data analysis and uncover valuable insights.
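
As a preview, here is a minimal sketch of these Pandas operations (the file name House_Price.csv and the price column are illustrative placeholders, not necessarily the exact course files):

```python
import pandas as pd

# Import data from a CSV file with headers in the first row
df = pd.read_csv("House_Price.csv", header=0)

print(df.head())        # first five rows
print(df.shape)         # (rows, columns)
print(df.describe())    # descriptive statistics for numeric columns

# loc selects by label, iloc by integer position
print(df.loc[0:4, ["price"]])   # rows 0-4 of the 'price' column
print(df.iloc[0:5, 0:3])        # first five rows, first three columns
```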

14
Quick coding exercise on Pandas Library
15
Working with Seaborn Library of Python

Step into the world of Seaborn, a powerful data visualization library for Python. Discover how Seaborn complements and enhances the plotting capabilities of Matplotlib. Learn to plot distributions, histograms, and scatter plots with customizable features. Dive into the Iris dataset and explore various visualizations, including scatter plots and pair plots, to uncover patterns and insights in the data. This Python course provides a glimpse into the potential of Seaborn and sets the stage for deeper explorations in data analysis and visualization.

16
Python file for additional practice
17
Quiz

Integrating ChatGPT with Python

1
Integrating ChatGPT within JUPYTER notebook

Basics of statistics

1
Types of Data

In this video, we delve into the essentials that form the building blocks of statistical analysis. Learn about the crucial distinction between qualitative and quantitative data, exploring their subtypes: nominal and ordinal for qualitative data, and discrete and continuous for quantitative data. Discover how to identify and differentiate these data types, empowering you to choose the appropriate analysis techniques for your data. Prepare yourself with the key terminology discussed in this lecture for future statistical explorations.

2
Types of Statistics

Dive into the realm of statistics and its two fundamental types: descriptive and inferential. In this video, we'll focus on descriptive statistics, which provide insights into data through measures of center and dispersion. Uncover the power of average, median, mode, range, and standard deviation in understanding data distributions. Dive into frequency distributions and bar charts for qualitative data and histograms for quantitative data. Additionally, explore inferential statistics, enabling us to draw conclusions and predictions from sample observations, with a special focus on neural networks for deep learning.

3
Describing data Graphically

Join us in this informative video as we dive into the world of data summarization. Discover the power of frequency distributions, which allow us to organize qualitative and quantitative data into categories and determine the frequency of each category. Learn how to convert these distributions into visually appealing bar charts and construct histograms to visualize the distribution of data. Gain insights into the concepts of relative frequency and class width, and explore the properties of histograms. Lastly, we touch upon the important concept of normal distribution and its implications in data analysis.

4
Measures of Centers

Join us in this insightful video as we unravel the world of descriptive numerical measures that help us understand the center of data. We delve into four essential measures: mean, median, mode, and mid-range. Learn how to calculate the mean, which represents the average value, and understand the distinction between population mean and sample mean. Discover the concept of median, the middle value in ordered data, and how to handle odd and even numbers of observations. Explore the mode, the value that occurs most frequently, and its relevance in qualitative data analysis. Lastly, grasp the concept of mid-range, which represents the average of the highest and lowest values. Gain valuable insights into when to use each measure based on data characteristics and outliers. Stay tuned for our next video, where we will dive into measures of dispersion.

5
Measures of Dispersion

Dive into the world of measures of dispersion in this informative video lecture. Discover the three key measures: range, standard deviation, and variance. Learn how to calculate the range by finding the difference between the largest and smallest values, and understand its limitations when outliers are present. Explore the concepts of standard deviation and variance, their relationship, and their role in determining the spread of data. Gain insights into the significance of larger standard deviation values and their implications on data dispersion.
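
To make these measures concrete, here is a small worked example using Python's built-in statistics module (the data values are made up for illustration):

```python
import statistics

data = [12, 15, 15, 18, 22, 90]           # 90 is an outlier

# Measures of center
print(statistics.mean(data))               # 28.67, pulled up by the outlier
print(statistics.median(data))             # 16.5, robust to the outlier
print(statistics.mode(data))               # 15, the most frequent value
print((max(data) + min(data)) / 2)         # 51.0, the mid-range

# Measures of dispersion
print(max(data) - min(data))               # 78, the range
print(statistics.pvariance(data))          # population variance
print(statistics.pstdev(data))             # population standard deviation
```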

Introduction to Machine Learning

1
Introduction to Machine Learning

Dive into the world of machine learning and discover its potential. Understand how machines learn from past data and improve their performance. Explore its applications in various industries, from banking to healthcare. Learn the distinction between machine learning, statistics, and artificial intelligence. Dive into supervised and unsupervised learning and their practical implementations. Gain insights into regression and classification problems and how they drive decision-making. Uncover the significance of data analysis and predictive modeling in shaping the future.

2
Building a Machine Learning Model

This video outlines the seven crucial steps involved in building a machine learning model. Starting with problem formulation, it explains how to convert a business problem into a statistical problem. The subsequent steps cover data tidying and preprocessing, including data cleaning, filtering, missing value treatment, and outlier handling. The importance of splitting data into training and testing sets is emphasized, followed by training the model and assessing its performance. Finally, the video highlights the significance of using the prediction model, setting up a pipeline, monitoring outputs, and automating the scoring process.

Data Preprocessing

1
Gathering Business Knowledge

This video emphasizes the importance of understanding the business context when tackling a problem. It highlights the need to identify relevant variables and gather quality data for analysis, as the inputs greatly influence the outputs. The video discusses two approaches: primary research (interacting with stakeholders and experiencing the business firsthand) and secondary research (reviewing existing studies and reports). An example of cart abandonment in an online business is provided to illustrate the process.

2
Data Exploration

This video emphasizes the three crucial steps of gathering relevant data for analysis. Firstly, it highlights the importance of identifying the required data based on business knowledge and research objectives. Secondly, it discusses the process of requesting data from internal and external sources, including teams within the organization and external data providers. Lastly, it emphasizes the significance of conducting a quality check on the received data. Using the example of cart abandonment, the video demonstrates how business understanding guides the selection of specific data elements to collect.

3
The Dataset and the Data Dictionary

This video discusses the process of gathering and organizing data for analyzing house pricing in a real estate company. It highlights the importance of collating data from different sources and structuring it into a tabular format. The video emphasizes the need for a data dictionary that provides definitions of variables, including the dependent variable (price) and independent variables (e.g., crime rate, residential area proportion). It also mentions categorical variables, such as the presence of an airport or bus terminal.

4
Importing Data in Python

In this tutorial, the presenter guides the audience on how to import house price data into Python using Jupyter Notebook. The step-by-step process includes launching Jupyter Notebook, setting up the working directory, and importing necessary libraries like NumPy, Pandas, and Seaborn for data analysis and visualization. The tutorial also demonstrates how to check the working directory and import the CSV file containing house price data. The final output shows the first five rows of the dataset and its dimensions (506 rows and 19 columns).
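
A minimal sketch of the workflow described above (the file name House_Price.csv is assumed for illustration; the actual course resource may differ):

```python
import os
import numpy as np
import pandas as pd
import seaborn as sns

print(os.getcwd())              # check the current working directory
# os.chdir("path/to/folder")    # change it if the CSV lives elsewhere

df = pd.read_csv("House_Price.csv", header=0)
print(df.head())                # first five rows of the dataset
print(df.shape)                 # expected here: (506, 19)
```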

5
Univariate analysis and EDD

In this tutorial, the presenter introduces univariate analysis, which focuses on analyzing individual variables in a dataset. The tutorial explains that descriptive statistics, such as mean, median, mode, range, quartiles, and standard deviations, are used to summarize and describe the data. For categorical variables, the count of each category is examined. The tutorial emphasizes the importance of Extended Data Dictionary (EDD) in identifying issues like outliers and missing values. The upcoming videos will delve into addressing these issues in data analysis.

6
EDD in Python

This video focuses on conducting exploratory data analysis (EDA) and provides variable descriptions for a dataset. The EDA begins with calculating descriptive statistics using the describe() function. The analysis includes examining the mean, median, minimum, maximum, quartiles, and standard deviation for each variable. The video highlights the importance of the EDD (Extended Data Dictionary) in identifying missing values, differences between mean and median, skewness, and potential outliers. Scatter plots are used to visualize relationships between variables. Categorical variables are analyzed using count plots. The video concludes with key observations regarding missing values, outliers, and certain variables with limited variation.

7
Outlier Treatment

In this video, we explore the concept of outliers in data analysis and emphasize the importance of addressing them before model training. Outliers, which deviate significantly from the overall pattern of a variable, can arise from measurement or data entry errors. The video highlights the impact of outliers on the mean, median, and standard deviation, emphasizing the need for careful handling to improve prediction accuracy. Various methods for identifying outliers, such as box plots, scatter plots, and histograms, are discussed. The video also presents approaches for treating outliers, including capping and flooring, exponential smoothing, and the sigma approach.

8
Outlier Treatment in Python

This video provides a step-by-step guide on how to identify and treat outliers in Python using the pandas and numpy libraries. It covers various techniques, such as capping and flooring, for handling outliers. The video demonstrates how to identify outliers by computing percentile values and then replace them with alternative values. Additionally, it explores the transformation of variables to remove outliers without eliminating them completely. With detailed explanations and code examples, this video offers a comprehensive approach to outlier treatment in Python.
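
Here is an illustrative sketch of percentile-based capping and flooring (the DataFrame df is assumed from the earlier import, and the column names and multipliers below are example choices, not necessarily the course's exact ones):

```python
import numpy as np

# Capping: pull extreme high values down to a ceiling
upper = np.percentile(df["n_hot_rooms"], 99)                 # 99th percentile
df.loc[df["n_hot_rooms"] > 3 * upper, "n_hot_rooms"] = 3 * upper

# Flooring: pull extreme low values up to a floor
lower = np.percentile(df["rainfall"], 1)                     # 1st percentile
df.loc[df["rainfall"] < 0.3 * lower, "rainfall"] = 0.3 * lower
```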

9
Missing Value Imputation

This video explores the common issue of missing values in datasets and discusses the challenges they pose in machine learning. It presents two options for handling missing values: removing the affected rows or replacing the missing values with neutral values. The video highlights various imputation techniques, such as imputing with zero, using central tendency measures (mean or median), assigning the most frequent category, or considering segment-specific means. The importance of utilizing business knowledge to select appropriate imputation methods is emphasized. Additionally, the video mentions that software tools can assist in identifying and filling missing values effectively.

10
Missing Value Imputation in Python

This video focuses on handling missing values in Python. It discusses the identification of missing values using the info function and emphasizes the importance of treating missing values for accurate analysis. The video demonstrates the use of the fillna function to impute missing values, specifically using the mean of the n_hos_beds variable as an example. It highlights the option to perform missing value imputation for all columns using the fillna function with df.mean. The video concludes by emphasizing the need for tailored solutions based on specific variables and highlights the significance of saving the modified data frame.
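
A minimal sketch of mean imputation with fillna, using the n_hos_beds variable mentioned above (df is assumed to be the house-price DataFrame from the earlier lectures):

```python
# Impute a single column with its mean
df["n_hos_beds"] = df["n_hos_beds"].fillna(df["n_hos_beds"].mean())

# Or impute every numeric column at once
df = df.fillna(df.mean(numeric_only=True))

# Verify that no missing values remain
print(df.info())
```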

11
Seasonality in Data

This video introduces the concept of seasonality in data, where recurring patterns occur over time. It explains how seasonality can be influenced by various factors such as weather or holidays, leading to fluctuations in data. To address the impact of seasonality on modeling, a correction factor is calculated. The video demonstrates an example of removing seasonality from sales data by multiplying the values with the calculated factors. The resulting normalized data is then suitable for analysis and model fitting.

12
Bi-variate analysis and Variable transformation

In this informative video, we delve into the concept of bivariate analysis, which involves examining the relationships between two variables. We explore two popular methods: scatterplots and correlation matrices. By analyzing scatterplots, we determine whether there is a visible relationship between variables and assess if it's linear or nonlinear. Additionally, correlation matrices help us identify the strength and direction of correlations. We also discuss variable transformations, such as logarithmic and exponential functions, to achieve a more linear relationship. Join us as we apply these techniques to a dataset on house pricing and crime rates.

13
Variable transformation and deletion in Python

In this video, we revisit our observations from the EDD (Extended Data Dictionary). We have already corrected missing values and outliers. Now, we focus on transforming the crime rate variable to establish a linear relationship with the price variable. By taking the logarithm of the crime rate variable, adding one to avoid undefined values, and plotting a joint plot, we observe a more linear relationship. We also create an average distance variable to represent the distances from the employment hub. Finally, we remove the redundant distance variables and the bus terminal variable from the dataset.

14
Non-usable variables

In this video, we dive into the process of selecting relevant variables for analysis. We explore four important points to determine the usefulness of variables. Firstly, variables with a single unique value, such as "bus terminal," are removed as they provide no meaningful information. Secondly, variables with a low fill rate and a large number of missing values are considered for deletion, as imputing values may not accurately capture their impact on the output. Thirdly, sensitive variables that may lead to discrimination or regulatory issues are excluded. Lastly, the importance of business knowledge and the use of bivariate analysis to validate logical relationships between variables are emphasized.

15
Dummy variable creation: Handling qualitative data

This video focuses on the handling of categorical variables in regression analysis. It explains the need for creating dummy variables to represent non-numeric categories. The process involves assigning numerical values of 0 or 1 to each category, following a specific rule. The video clarifies the number of dummy variables required and emphasizes that nominal data cannot be assigned ordered numerical values. The interpretation of regression analysis results with dummy variables is discussed, which will be explored further in subsequent analysis.

16
Dummy variable creation in Python

This video explains the process of creating dummy variables in Python to convert non-numerical categorical variables into numerical values. The example focuses on two variables: "airport" with categories "yes" and "no," and "waterbody" with categories "lake," "river," "lake and river," and "none." The video demonstrates how dummy variables are generated using the pandas function "get_dummies()" and showcases the resulting transformed data. It also discusses the need to delete redundant variables and emphasizes the importance of case sensitivity in Python programming.
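
An illustrative sketch of dummy variable creation with get_dummies (the resulting dummy column names depend on the exact category labels in the data, so the names dropped below are assumptions):

```python
import pandas as pd

# Convert the categorical columns named in the lecture into 0/1 dummies
df = pd.get_dummies(df, columns=["airport", "waterbody"])

# Drop one redundant dummy per variable to avoid perfect collinearity
df = df.drop(["airport_NO", "waterbody_None"], axis=1)  # names illustrative
print(df.head())
```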

17
Correlation Analysis

This video explores the concept of correlation and correlation coefficient in data analysis. It explains how correlation helps identify the relationship between variables and classifies them as positively or negatively correlated. The correlation coefficient, ranging from -1 to 1, quantifies the correlation strength. However, it emphasizes that correlation does not imply causation and discusses the importance of distinguishing between the two. The video demonstrates the use of a correlation matrix to analyze multiple variables' correlations and highlights how identifying highly correlated independent variables can help avoid multicollinearity in statistical models.

18
Correlation Analysis in Python

This video explores the use of correlation matrices in data analysis. It explains how correlation matrices provide insights into the relationships between variables. By examining correlation coefficients, variables with high correlations to the dependent variable can be identified as important for analysis. The video also highlights the issue of multicollinearity caused by high correlations between independent variables and suggests methods for selecting variables to mitigate this problem. Ultimately, the video demonstrates how correlation analysis helps determine variable importance and supports data-driven decision-making.
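
A minimal sketch of building and visualizing a correlation matrix (assuming the house-price DataFrame df with a price column, as in the earlier sketches):

```python
import seaborn as sns
import matplotlib.pyplot as plt

corr = df.corr(numeric_only=True)    # pairwise correlation coefficients
print(corr["price"].sort_values())   # correlations with the dependent variable

# A heatmap makes highly correlated pairs (potential multicollinearity) easy to spot
sns.heatmap(corr, cmap="coolwarm")
plt.show()
```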

19
Quiz

Linear Regression

1
The Problem Statement

This video delves into the fundamentals of linear regression, a simple yet powerful approach for supervised learning. It emphasizes the significance of understanding linear regression before exploring more complex machine learning methods. The video introduces key concepts and focuses on the widely used least square approach for fitting linear models. Using a house pricing dataset as an example, it showcases how linear regression can accurately predict house prices and estimate the impact of individual variables. This video serves as a comprehensive guide to mastering linear regression for effective predictive modeling.

2
Basic Equations and Ordinary Least Squares (OLS) method

In this video, we explore the concept of simple linear regression as a straightforward approach for predicting house prices based on a single predictor variable. By assuming a linear relationship between the predictor variable (number of rooms) and the target variable (house price), we formulate a mathematical equation to estimate the coefficients of the linear model. The video introduces the least square method as a measure of model fit and explains how to minimize the sum of squared residuals to obtain the optimal coefficients. The video emphasizes the importance of understanding the concepts behind linear regression and interpreting the results rather than memorizing the mathematical formulas.

3
Assessing accuracy of predicted coefficients

In this video, we explore the accuracy of the regression coefficients beta 0 and beta 1 in predicting house prices based on the number of rooms. We analyze a sample of 506 observations and discuss the difference between the sample regression line and the population regression line. By calculating the standard error, we establish confidence intervals for the population coefficients. Additionally, we introduce hypothesis testing using the t-statistic and p-value to determine the significance of the relationship between the predictor and response variables.

4
Assessing Model Accuracy: RSE and R squared

In this video, we evaluate the accuracy of our created model by looking at two key measures: the residual standard error (RSE) and the coefficient of determination (R-squared). The RSE, calculated as the average deviation of the response from the regression line, provides insights into the model's lack of fit. On the other hand, R-squared measures the proportion of variability in the response variable explained by the model. An R-squared value close to 1 indicates a strong fit, while values around 0 indicate a poor fit. Generally, an R-squared greater than 0.5 is considered good for most applications.

5
Simple Linear Regression in Python

In this tutorial, we explore how to build a simple linear regression model in Python using two different libraries: statsmodels.api and sklearn. We import the necessary libraries and define our dependent variable (Y) and independent variable (X). We fit the model, analyze the summary statistics including coefficients, p-values, and R-squared, and interpret the results. Additionally, we demonstrate how to predict values, plot the regression line, and provide insights into interpreting the slope and intercept of the line.
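
Here is a hedged sketch of both approaches (room_num and price are assumed column names from the housing dataset; X, y, and df continue from the earlier sketches):

```python
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression

X = df[["room_num"]]   # single predictor
y = df["price"]        # dependent variable

# statsmodels: full statistical summary
X_const = sm.add_constant(X)          # adds the intercept term
model = sm.OLS(y, X_const).fit()
print(model.summary())                # coefficients, p-values, R-squared

# sklearn: the same fit, geared toward prediction
lm = LinearRegression().fit(X, y)
print(lm.intercept_, lm.coef_)        # intercept and slope of the line
print(lm.predict(X[:5]))              # predicted prices for the first five rows
```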

6
Multiple Linear Regression

In this tutorial, we delve into the concept of multiple linear regression, which allows us to analyze the relationship between a dependent variable and multiple predictor variables. We extend our analysis from simple linear regression to accommodate the complexity of real-world scenarios with numerous predictors. We discuss the mathematical formulation of the multiple regression equation and interpret the coefficients, which quantify the impact of each predictor on the dependent variable while holding other variables constant. We explore the significance of the coefficients through standard errors, t-values, and p-values. The tutorial also presents the results of a multiple linear regression model run on a housing dataset, highlighting the multiple R-squared and adjusted R-squared values that indicate the model's ability to explain the variance in the dependent variable.

7
The F-statistic

In this informative video, we explore the significance of the F-statistic in multiple linear regression. While individual t-values and p-values can suggest relationships between predictors and the response variable, they can lead to incorrect conclusions, especially with a large number of variables. We introduce the F-statistic, which considers the number of predictors, and discuss its role in determining the overall significance of the model. By comparing the F-statistic's p-value against a threshold, we can ensure that the chosen predictors have a significant impact on the response variable.

8
Interpreting results of Categorical variables

In this video, we dive into the results obtained for the categorical variables in our dataset. Two categorical variables, "airport" and "water body," were converted into corresponding dummy variables to fit the linear regression model. We interpret the coefficients and p-values to understand the impact of these variables on the house prices. The "airport" variable suggests a significant increase in house prices when an airport is present. However, the "water body" variables (lake and river) show no statistical evidence of influencing house prices.

9
Multiple Linear Regression in Python

In this tutorial, we explore the process of constructing multiple linear regression models in Python. We begin by utilizing the statsmodels and sklearn libraries to create our models. Through code examples, we demonstrate how to prepare the independent and dependent variables, add a constant term, fit the model, and obtain model summaries. The significance of variables and their coefficients are discussed, emphasizing the interpretation of signs and p-values. Additionally, we compare the results obtained from statsmodels and sklearn, highlighting their respective advantages.

10
Test-train split

The script discusses the significance of test data in assessing the accuracy of predictive models. It emphasizes the difference between training and test errors and the need to avoid overfitting. Three popular techniques for dividing data into training and test sets are explained: the validation set approach, leave-one-out cross-validation, and K-fold cross-validation. The goal is to find the model with the best test error, ensuring better predictions for unseen data.

11
Bias Variance trade-off
This video delves into the concept of the bias-variance tradeoff, a fundamental aspect of model evaluation. It explains how the expected test error is influenced by variance and bias, while acknowledging the presence of an irreducible error. Variance represents the sensitivity of a model's predictions to changes in the training data, while bias captures the error introduced by oversimplifying a complex relationship. The script illustrates how increasing model flexibility leads to higher variance and lower bias, emphasizing the need to strike a balance to minimize overall error. The tradeoff is visually depicted, highlighting the search for the optimal point where bias and variance are minimized.

12
Test train split in Python

This video demonstrates how to split data into training and test sets using the train_test_split function in Python's scikit-learn library. It explains the process of importing the function, defining the independent and dependent variables, specifying the test size ratio, and setting a random state for reproducibility. The video shows how to create a linear regression model, train it using the training set, and make predictions on the test set. Finally, it calculates the R-squared values for both the training and test data to evaluate the model's performance.
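
A minimal sketch of this workflow, assuming X and y have been defined as in the earlier lectures:

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# 80:20 split; random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

lm = LinearRegression().fit(X_train, y_train)

# R-squared on both sets; a large gap suggests overfitting
print(lm.score(X_train, y_train))
print(lm.score(X_test, y_test))
```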

13
Regression models other than OLS

This video introduces alternative linear models that provide improved prediction accuracy and model interpretability compared to the standard linear model. It highlights the limitations of ordinary least squares (OLS) regression, especially when dealing with a large number of variables or a small number of observations. The video discusses the concept of variable selection to exclude irrelevant variables and focuses on two types of methods: subset selection, where a subset of variables is used in the model, and shrinkage methods (regularization), which shrink coefficients towards zero. These alternative models aim to enhance both accuracy and interpretability.

14
Subset selection techniques

In this video, different types of subset selection techniques for linear models are discussed. The three main methods covered are best subset selection, forward stepwise selection, and backward stepwise selection. Best subset selection involves fitting separate regression models for each combination of predictor variables and selecting the best model based on R-squared. Forward stepwise selection starts with no predictors and gradually adds one variable at a time until all predictors are included. Backward stepwise selection begins with all predictors and removes them one by one. These techniques provide computationally efficient alternatives to best subset selection, although they may not guarantee the best model.

15
Shrinkage methods: Ridge and Lasso

In this video, we explore shrinkage methods for linear regression models. The two main techniques discussed are Ridge regression and Lasso. Ridge regression involves minimizing a modified quantity that includes a shrinkage penalty, which helps reduce the coefficient values towards zero. This technique improves model bias and variance trade-off and requires standardized predictor variables. On the other hand, Lasso performs variable selection by allowing some coefficients to become exactly zero. It leads to a more interpretable model compared to Ridge regression. The choice between the two methods depends on the number of predictor variables and the expected relationship with the response variable.

16
Ridge regression and Lasso in Python

Learn how to run ridge and lasso regression in Python in this lecture. Start by standardizing the data using the preprocessing module from sklearn. Then, create the Ridge object and fit the model. Find the best lambda value by using the validation curve and select the model with the highest R-squared value. Finally, fit the ridge model with the best lambda on the training dataset and calculate the R-squared values for both the training and test datasets.
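
An illustrative sketch of this procedure (the lambda search range and the Lasso alpha below are arbitrary example values, not the course's exact choices; X_train, X_test, y_train, y_test continue from the split above):

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import validation_curve
from sklearn.preprocessing import StandardScaler

# Shrinkage penalties are scale-sensitive, so standardize the predictors first
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# Evaluate Ridge across a range of penalty values (sklearn calls lambda "alpha")
param_range = np.logspace(-2, 8, 100)
train_scores, test_scores = validation_curve(
    Ridge(), X_train_s, y_train,
    param_name="alpha", param_range=param_range, scoring="r2")

# Keep the alpha with the highest cross-validated R-squared
best_alpha = param_range[np.argmax(test_scores.mean(axis=1))]
ridge = Ridge(alpha=best_alpha).fit(X_train_s, y_train)
print(ridge.score(X_train_s, y_train), ridge.score(X_test_s, y_test))

# Lasso follows the same pattern but can shrink coefficients exactly to zero
lasso = Lasso(alpha=0.1).fit(X_train_s, y_train)
print(lasso.coef_)
```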

17
Heteroscedasticity

This video addresses the issue of heteroskedasticity in models. Heteroskedasticity refers to the non-constant variance of error terms, often increasing with the response variable's values. Graphically, this is observed as a funnel-like shape when plotting residuals against the response variable. To handle heteroskedasticity, scaling down larger response values is recommended, using methods like taking the logarithm or square root. By doing so, the residuals tend to exhibit constant variance. This video explains how to identify and address heteroskedasticity in a model.

Introduction to the classification Models

1
Three classification models and Data set

In this section on classification models, we explore logistic regression, K-nearest neighbors (KNN), and Linear Discriminant Analysis (LDA). Classification problems involve predicting categorical variables, such as determining if a football player will score a goal or if a patient has a heart issue. These techniques provide accurate predictions without being computationally heavy. The training data focuses on predicting the selling potential of properties based on past transactions, classifying them as sold (1) or not sold (0) within three months. The data is preprocessed, and the upcoming videos will cover importing the dataset and implementing the models.

2
Importing the data into Python

In this script, we learn how to import a CSV file using the pandas library in Python. The script demonstrates the use of the pd.read_csv function to read the CSV file into a dataframe. The location of the CSV file is specified as the first argument, and the presence of headers is indicated by setting header=0. The script also mentions converting backslashes to forward slashes in the file location if using Windows.

3
The problem statements

This video introduces two types of business questions that can be answered using the model being built: prediction questions and inferential questions. Prediction questions focus on accurately predicting whether a house will be sold within three months of being listed, without considering the individual variables' impact. Inferential questions, on the other hand, aim to determine the importance and impact of each independent variable on the response variable. The video mentions using various classifiers to find answers to these questions.

4
Why can't we use Linear Regression?

This video highlights the limitations of using linear regression for classification tasks and introduces the concept of logistic regression. Using an example dataset, the video explains that linear regression cannot handle response variables with more than two levels and does not provide probability values. It also discusses the sensitivity of linear regression to outliers, leading to incorrect predictions. The upcoming discussion will delve into logistic regression and how it addresses these issues.

Logistic Regression

1
Logistic Regression

This video introduces logistic regression as a classification technique for predicting credit default. It explains the limitations of using linear regression for classification and demonstrates how logistic regression overcomes those limitations. By using the logistic function (sigmoid function), logistic regression ensures the output probability is bounded between zero and one, providing more meaningful interpretations. The video also introduces the concept of maximum likelihood method for estimating the coefficients in logistic regression. The next video will focus on training a logistic model using a single predictive variable.

2
Training a Simple Logistic Model in Python

This script demonstrates two methods for creating logistic regression models in Python. The first method uses the sklearn library, which is preferred by professionals due to its extensive documentation and implementation techniques. The second method utilizes the statsmodels library, which provides detailed statistical information but has more limited documentation and some bugs. The script explains the steps involved in creating the models, including importing the necessary libraries, preparing the data, fitting the model, and obtaining the coefficients. It highlights the differences between the two methods and emphasizes the advantages of using sklearn.
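
A minimal sketch of both methods, assuming a DataFrame df with price and Sold columns as described in this section (the column capitalization is an assumption):

```python
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

X = df[["price"]]     # single predictor, as in the lecture
y = df["Sold"]        # binary response: 1 = sold, 0 = not sold

# sklearn: simple to fit, geared toward prediction
clf = LogisticRegression().fit(X, y)
print(clf.intercept_, clf.coef_)

# statsmodels: Logit gives the full statistical summary
logit = sm.Logit(y, sm.add_constant(X)).fit()
print(logit.summary())   # beta values, standard errors, z and p values
```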

3
Result of Simple Logistic Regression

This video explores the significance of beta values, standard error, z-value, and p-value in logistic regression. The focus is on determining whether the predictive variable (price) has a significant impact on the response variable (sold). The video explains the null and alternative hypotheses, emphasizing the need to establish that beta one is non-zero. It describes the calculation of the standard error, z-value, and p-value and highlights the importance of the p-value in determining the significance of variables. The threshold for the p-value is discussed, with lower values indicating a stronger impact on the response variable.

4
Logistic with multiple predictors

This video discusses the extension of logistic regression to handle multiple predictors and the use of boundary limits to determine the probability of a binary response. It explains how the model is trained using maximum likelihood criteria to obtain the values of beta coefficients. The video also briefly mentions the possibility of handling multiple classes in the response variable and introduces linear discriminant analysis as an alternative technique. The upcoming lectures will delve into the details of linear discriminant analysis.

5
Training multiple predictor Logistic model in Python

This video guides the process of creating a logistic regression model with multiple independent variables in Python. It emphasizes reusing the code used for single-variable logistic regression by modifying the x and y variables accordingly. The video demonstrates how to remove the "sold" variable from the data frame to obtain the independent variables, x. It then explains how to fit the logistic regression model using the classifier object and obtain the coefficient values for each variable. The video also introduces the statsmodels version of the model and provides an overview of the model's summary, including coefficients, standard errors, z-values, and p-values.

6
Confusion Matrix

This video introduces the concept of a confusion matrix, which is used to evaluate the performance of a trained model. The matrix is composed of true and predicted values, with the rows representing the predicted values and the columns representing the true values. The video explains how correct predictions are located on the diagonal of the matrix, while incorrect predictions are off the diagonal. It distinguishes between type one and type two errors, providing a memorable image to differentiate them. The video emphasizes that the cost of each type of error can vary and discusses how adjusting the boundary value can address this. It concludes by mentioning that the confusion matrix is a common method for assessing prediction accuracy and provides insights on how to create one using software packages.

7
Creating Confusion Matrix in Python

This video covers the prediction of values using a logistic regression model and the creation of a confusion matrix based on the predicted values. It demonstrates how to obtain predicted values and probabilities from the model using the "predict" and "predict_proba" functions. The video explains the significance of setting a boundary condition, which is typically 0.5 but can be adjusted based on the cost of false positive and false negative errors. It showcases how to generate a confusion matrix using the "confusion_matrix" function, highlighting true positives, true negatives, false positives, and false negatives. The impact of changing the boundary condition is also discussed.
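
An illustrative sketch, assuming the fitted classifier clf from the earlier logistic regression sketch (the 0.3 boundary below is an arbitrary example):

```python
from sklearn.metrics import confusion_matrix

# Class predictions use a default boundary of 0.5
y_pred = clf.predict(X)
print(confusion_matrix(y, y_pred))

# predict_proba returns probabilities, so the boundary can be tuned
proba = clf.predict_proba(X)[:, 1]          # probability of class 1
y_pred_custom = (proba > 0.3).astype(int)   # e.g. when false negatives are costly
print(confusion_matrix(y, y_pred_custom))
```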

8
Evaluating performance of model

This video focuses on explaining the terms used to evaluate the performance of a classifier. It introduces the concept of a confusion matrix and defines terms such as true negative, false positive, false negative, and true positive. Various performance measures are discussed, including the false positive rate, precision, specificity, sensitivity, and the area under the receiver operating characteristic (ROC) curve. The video illustrates how these measures relate to real-world scenarios and the associated costs and benefits. It emphasizes the importance of selecting the appropriate performance measure based on the specific requirements of the classifier.

9
Evaluating model performance in Python

In this lecture, the focus is on calculating performance metrics for machine learning models. The three metrics covered are precision, recall, and the Area Under the Curve (AUC) of the ROC curve. The presenter demonstrates how to compute these metrics in Python using the scikit-learn library. Precision measures the accuracy of positive predictions, recall evaluates the ability to correctly identify positive instances, and AUC assesses overall model performance. The lecture aims to equip learners with the skills to assess and compare the effectiveness of different models.
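
A minimal sketch of these three metrics, reusing y, y_pred, and proba from the sketches above:

```python
from sklearn.metrics import precision_score, recall_score, roc_auc_score

print(precision_score(y, y_pred))   # accuracy of positive predictions
print(recall_score(y, y_pred))      # share of actual positives identified
print(roc_auc_score(y, proba))      # area under the ROC curve
```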

Linear Discriminant Analysis (LDA)

1
Linear Discriminant Analysis

This lecture provides an intuitive explanation of linear discriminant analysis (LDA), a technique often overlooked due to its mathematical complexity. The instructor simplifies LDA using a student fitness example and explains how conditional probabilities are calculated. The lecture highlights the importance of Bayes' Theorem and the assumptions of normal distribution in LDA. The benefits of LDA, its classification process, and the possibility of adjusting boundary conditions are also discussed. The lecture concludes with a mention of comparing LDA with logistic regression and evaluating prediction quality using the confusion matrix.

2
LDA in Python

This video tutorial guides viewers on training a Linear Discriminant Analysis (LDA) model using their own data. The instructor demonstrates importing the LDA function, fitting the data into the model, and making predictions. The creation of a confusion matrix is also shown to evaluate the model's performance in terms of false positives and negatives across all classes. The tutorial emphasizes the simplicity of running LDA in Python and highlights the significance of interpreting the confusion matrix.
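
A minimal sketch of LDA in scikit-learn, assuming the X and y from the logistic regression sketches above:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

# Fit the LDA model and make predictions
lda = LinearDiscriminantAnalysis().fit(X, y)
y_pred_lda = lda.predict(X)

# Evaluate false positives and negatives across the classes
print(confusion_matrix(y, y_pred_lda))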

K Nearest neighbors classifier

1
Test-Train Split

This video explains the importance of separating data into training and test sets when evaluating model performance. The concept of training error versus test error is introduced, highlighting the need to assess model accuracy on unseen data. The video discusses different techniques for data splitting, including the validation set approach, leave-one-out cross-validation, and k-fold cross-validation. The limitations and benefits of each technique are briefly outlined, with a focus on the validation set approach. This approach involves randomly dividing the data into training and test sets, with a suggested split ratio of 80:20.

2
Test-Train Split in Python

In this video tutorial, viewers are guided on how to split available data into test and train sets using the train_test_split function from the scikit-learn library. The process involves defining four variables: independent train variable, independent test variable, dependent train variable, and dependent test variable. The tutorial demonstrates the use of the train_test_split function and explains the parameters involved, such as the test size and random state. The script also highlights the importance of evaluating the model's performance using the R-squared values of the test set rather than the training set.

3
K-Nearest Neighbors classifier

This video explains the intuition behind the K-nearest neighbors (KNN) classifier. KNN classifies observations based on the conditional probability of belonging to each class, without assuming any functional relationship. The concept is illustrated using a simple diagram, showcasing how the value of K influences the classification decision. The importance of choosing the optimal K value and the impact of variable scale on KNN results are discussed. The video also mentions the process of standardizing variables to address the issue of variable scale.

4
K-Nearest Neighbors in Python: Part 1

This lecture focuses on creating a K-nearest neighbors (KNN) model in Python. The video demonstrates the standardization of independent variables using the preprocessing module's StandardScaler function. The standardized data is then used to train a KNN model with a specified number of neighbors (K). The accuracy of the model is evaluated using the confusion matrix and the accuracy score. The script provides an example with K=1 and K=3, showcasing how the accuracy of the model can vary based on the chosen number of neighbors.
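
An illustrative sketch of standardization plus a K=3 model (X_train, X_test, y_train, y_test are assumed from the train/test split covered earlier in this section):

```python
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# KNN depends on distances, so variables must be on a common scale
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train_s, y_train)
y_pred = knn.predict(X_test_s)
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
```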

5
K-Nearest Neighbors in Python: Part 2

This lecture demonstrates how to tune a K-nearest neighbors (KNN) model across multiple values of K using grid search. The script imports GridSearchCV from the model_selection module of scikit-learn and creates a dictionary of parameter values for K. The GridSearchCV object is then trained using the KNN classifier and the dictionary of parameters. The best parameter and model are obtained through the attributes best_params_ and best_estimator_. The script showcases an example with ten values of K and evaluates the accuracy of the optimized KNN model.
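
A minimal sketch of the grid search described above, reusing the standardized training data from the previous sketch:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Try ten values of K and keep the best by cross-validated accuracy
params = {"n_neighbors": list(range(1, 11))}
grid = GridSearchCV(KNeighborsClassifier(), params)
grid.fit(X_train_s, y_train)

print(grid.best_params_)                # best value of K
best_knn = grid.best_estimator_         # model refit with the best K
print(best_knn.score(X_test_s, y_test))
```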

6
Quiz

Comparing results from 3 models

1
Understanding the results of classification models

This video discusses the interpretation of model results and compares the accuracy of three classifiers: logistic regression, linear discriminant analysis (LDA), and K-nearest neighbors (KNN). The script emphasizes the importance of examining the p-values and beta values in logistic regression and LDA models to understand the impact of variables on the response variable. However, KNN lacks such interpretability due to its non-parametric nature. The confusion matrices are presented to evaluate the accuracy of the classifiers on a test set, with LDA performing the best. Business insights and the choice of the optimal classifier can be derived from these results.

2
Summary of the three models

This video summarizes the essential steps for classification problem-solving. It begins with data collection, followed by data preprocessing, including outlier treatment, missing value imputation, and variable transformation. Model training is then performed, utilizing templates for logistic regression, linear discriminant analysis, and K-nearest neighbors. The importance of iterations to explore alternative decisions is emphasized. Comparing the performance of different models using techniques like the confusion matrix helps in selecting the best model. The choice of the model depends on the problem type: prediction or interpretation. Finally, the selected model can be used for predicting future observations.

Simple Decision Trees

1
Introduction to Decision trees

This video introduces decision trees as a popular and interpretable machine learning technique. It highlights the applicability of decision trees in predicting patterns in complex data and their popularity in the business and data science community. The video mentions the topics that will be covered in the series, including regression trees, classification trees, tree pruning, bagging, random forest, and boosting techniques. It specifically focuses on using a dataset related to movies to build a regression tree for predicting box office collection. Preprocessing steps and data import are mentioned as upcoming topics.

Detailed Rating (4.5 out of 5, 323 ratings)

5 stars: 183
4 stars: 100
3 stars: 34
2 stars: 2
1 star: 4