Career in data science

A career in data science has a lot to offer, from hands-on learning to the chance to contribute to real-world projects.

A career in data science is a highly sought-after and rewarding field that is projected to continue growing in demand. As businesses and organizations increasingly rely on data to make decisions, the need for skilled data scientists to extract insights from that data is becoming more pressing.

Career opportunities in data science

There are many career opportunities in data science, including roles such as:

Data Analyst: responsible for collecting, analyzing, and interpreting large sets of data.

Data Engineer: responsible for designing, building, and maintaining the infrastructure and systems that support data science efforts.

Machine Learning Engineer: responsible for designing and developing models that can learn from data, and deploying those models to production.

Business Intelligence Analyst: responsible for creating and maintaining reporting and analysis systems that provide insights to support business decision-making.

Data Scientist: responsible for using statistical and machine learning techniques to extract insights from data and communicate those insights to stakeholders.

Research Scientist: responsible for developing new techniques and algorithms in the field of machine learning and data science, often in an academic or research setting.

Data science: an amalgamation of different skills

A career in data science typically involves using a combination of computer science, statistics, and domain expertise to analyze large sets of data and extract insights that can be used to inform business decisions. This can include tasks such as building predictive models, identifying patterns and trends in data, and developing algorithms to automate decision-making.

Data science is an interdisciplinary field, and skilled practitioners are in high demand across many industries, including technology, finance, healthcare, and retail.

The role played by data science professionals

A data scientist is a professional who is responsible for using statistical and machine-learning techniques to extract insights from data and communicate those insights to stakeholders. They play a key role in the data-driven decision-making process in an organization.

Specific responsibilities

The specific responsibilities of a data scientist can vary depending on the industry and the company, but some common tasks include:

  • Collecting and cleaning large sets of data from various sources.
  • Exploring and analyzing the data using statistical and machine learning techniques.
  • Building and implementing models that can learn from data.
  • Communicating insights and findings to stakeholders through visualizations, reports, and presentations.
  • Deploying models to production systems.
  • Continuously monitoring the performance of models and updating them as needed.
  • Collaborating with cross-functional teams to identify new opportunities for data-driven decision-making.

Data scientists often work with large and complex data sets, and they need to be proficient in a variety of tools and technologies, such as programming languages like Python and R, data visualization tools like Tableau and Power BI, and machine learning libraries like scikit-learn and TensorFlow.

Data scientists are in high demand across many industries, including technology, finance, healthcare, retail, and more. With the growth of data and the rising importance of data-driven decision making, the role of the data scientist is becoming ever more important in organizations.

Prerequisites to study data science

To pursue a career in data science, it is typically recommended to have a strong background in mathematics and computer science, as well as experience with programming languages such as Python or R. Many data scientists also have advanced degrees in fields such as statistics, computer science, or electrical engineering.

In addition to strong technical skills, data scientists should also have excellent problem-solving and communication skills. The ability to translate complex technical concepts into plain language and to work with cross-functional teams is essential to be successful in this field.

There are various roles and career paths within the field of data science. Some data scientists may specialize in a particular area, such as machine learning or natural language processing, while others may work on a wide range of projects. Popular roles in data science include data analyst, data engineer, data architect, machine learning engineer, and data scientist.

The demand for data scientists continues to rise as organizations of all sizes and industries look to leverage data to drive growth and improve decision-making. According to a report from Glassdoor, a data scientist is among the top jobs in the United States and is expected to continue growing in demand in the coming years.

Salary packages for data science professionals

Salary packages for data science roles vary depending on factors such as location, industry, experience level, and specific job responsibilities. However, in general, data science roles tend to be well-paying.

According to data from Glassdoor, the average salary for a data scientist in the United States is around $120,000 per year, with some positions paying as much as $160,000 or more. Data engineers and machine learning engineers tend to earn slightly less, with an average salary of around $105,000 per year. Business intelligence analysts and data analysts tend to earn slightly less, with an average salary of around $70,000 – $90,000 per year.

It’s important to note that salary packages vary by location, with the highest-paying locations being San Francisco, Seattle, and New York City. The level of experience, skill set, and certifications also have an impact on the salary package.

Salary of a data scientist in India

The salary of a data scientist in India can vary depending on factors such as location, industry, experience level, and specific job responsibilities. However, on average, data science roles in India tend to be well-paying.

According to data from Glassdoor, the average salary for a data scientist in India is around INR 12,00,000 per year (or roughly USD 16,500), with some positions paying as much as INR 20,00,000 (or roughly USD 28,000) or more. Data engineers and machine learning engineers tend to earn slightly less, with an average salary of around INR 8,00,000 (or roughly USD 11,000) per year. Business intelligence analysts and data analysts tend to earn slightly less, with an average salary of around INR 6,00,000 (or roughly USD 8,500) per year.

It’s important to note that salary packages vary by location, with the highest-paying locations being metropolitan areas such as Mumbai, Delhi, and Bengaluru. The level of experience, skill set, and certifications also have an impact on the salary package.

Conclusion

In conclusion, a career in data science is a challenging and rewarding field that is growing in demand. With strong technical skills, problem-solving abilities, and communication skills, data scientists can find a wide range of opportunities in various industries. With the right education, skills, and experience, you can be well on your way to a successful career in data science.

Machine learning for beginners

Machine learning is a rapidly growing field that is changing the way we interact with technology. It is a method of teaching computers to learn from data, without explicitly programming them. This allows computers to identify patterns and make predictions, making it a powerful tool for solving complex problems.

If you’re new to machine learning, it can be overwhelming to know where to start. In this article, we will provide a beginner’s guide to machine learning, covering the basics and providing an overview of the most common techniques.

Supervised and unsupervised learning

First, it’s important to understand the difference between supervised and unsupervised learning. Supervised learning is when the computer is provided with labeled data, which means that the correct output is already known. This type of learning is used for tasks such as image classification, where the computer is shown an image and must identify what is in the image.

Unsupervised learning, on the other hand, is when the computer is not provided with labeled data. Instead, it must identify patterns and relationships within the data on its own. This type of learning is used for tasks such as clustering, where the computer groups similar data together.

Overfitting of models

Another important concept in machine learning is overfitting. This occurs when a model is too complex and performs well on the training data but poorly on new, unseen data. To prevent overfitting, it’s important to use techniques such as cross-validation and regularization.
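
As a minimal illustration of the idea (a sketch using scikit-learn on a synthetic dataset, not any particular project), k-fold cross-validation estimates how well a model generalizes by averaging its score over several held-out folds:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# synthetic classification data, purely for illustration
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)
# 5-fold cross-validation: train on 4 folds, score on the held-out fold
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())

A model that scores well on its training data but poorly across the held-out folds is likely overfitting.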

Important machine learning algorithms

There are several popular machine learning algorithms that are commonly used, including:

Linear regression: used for predicting a continuous outcome

Linear regression is a supervised machine learning algorithm used for predicting a continuous outcome. The goal of linear regression is to find the best linear relationship between the input variables (also known as independent variables or predictors) and the output variable (also known as the dependent variable or target). It does this by finding the line of best fit, represented by the equation:

y = b0 + b1*x1 + b2*x2 + … + bn*xn

Where y is the predicted value, x1, x2, …, xn are the input variables, and b0, b1, b2, …, bn are the coefficients that need to be learned. These coefficients are learned by minimizing the difference between the predicted values and the true values.

Linear regression is a simple and interpretable algorithm that makes it easy to understand the relationship between the input variables and the output variable. However, it has some limitations, such as the assumption that the relationship between the variables is linear, which may not always be the case. In such situations, more complex algorithms such as polynomial regression or non-linear regression may be used.

Linear regression can be implemented in various programming languages such as Python, R, and Matlab. The most popular libraries for implementing linear regression are scikit-learn, statsmodels, and TensorFlow.

In summary, linear regression is a basic yet powerful algorithm for predicting a continuous outcome. It is easy to implement and interpret, and it is widely used in various fields such as finance, economics, and engineering.
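
For a concrete taste, here is a minimal scikit-learn sketch on made-up toy data (the coefficients 2 and 1 are invented for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression

# toy data where y is roughly 2*x + 1 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X.ravel() + 1 + rng.normal(0, 0.5, size=100)

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)  # learned b0 and b1
print(model.predict([[5.0]]))         # predicted y for x = 5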

Logistic regression: used for predicting a binary outcome

Logistic regression is a supervised machine learning algorithm used for predicting a binary outcome. It is a variation of linear regression, where the goal is to model the probability of a certain class or event occurring. The logistic function (also called the sigmoid function) is used to map the input variables to a probability between 0 and 1. This function is represented by the equation:

p(x) = 1 / (1 + e^-(b0 + b1*x1 + b2*x2 + … + bn*xn))

Where x1, x2, …, xn are the input variables, b0, b1, b2, …, bn are the coefficients that need to be learned, and p(x) is the predicted probability of the event occurring.

The logistic regression algorithm uses the logistic function to estimate the probability of the event occurring and uses a threshold (usually 0.5) to classify the outcome as either 0 or 1.

Logistic regression is a widely used algorithm for classification problems, and it is easy to implement and interpret. However, it has some limitations, such as the assumption that the log-odds of the outcome are a linear function of the input variables, which may not always hold. In such situations, more complex algorithms such as decision trees or support vector machines may be used.

Logistic regression can be implemented in various programming languages such as Python, R, and Matlab. The most popular libraries for implementing logistic regression are scikit-learn, statsmodels, and TensorFlow.

In summary: logistic regression is a powerful algorithm for predicting a binary outcome. It is easy to implement and interpret, and it is widely used in various fields such as medicine, finance, and social sciences.
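
As a hedged sketch of the workflow in scikit-learn (using a built-in dataset purely for illustration):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(model.predict_proba(X_test[:1]))  # p(x) for the first test sample
print(model.score(X_test, y_test))      # accuracy with the default 0.5 threshold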

Decision trees: used for both classification and regression tasks

Decision trees are a supervised machine learning algorithm used for both classification and regression tasks. A decision tree is a tree-based model where each internal node represents a test on an attribute, each branch represents the outcome of a test, and each leaf node represents a class label (or, for regression, a predicted value).

The idea behind decision trees is to recursively partition the data into subsets based on the values of the input features. The algorithm starts at the root node and selects the feature that best splits the data into subsets with the most similar class labels. The process is repeated on each subset of the data until a stopping criterion is met. The final result is a tree of decisions that can be used to make predictions for new data.

One of the main advantages of decision trees is their interpretability. They are easy to understand and visualize, and they can handle both categorical and numerical features. However, decision trees can be prone to overfitting, especially when the tree becomes too deep. This can be addressed by using techniques such as pruning, which removes branches that do not add much value to the tree.

Another popular variation of decision trees is the random forest, which is an ensemble of decision trees. Random forests train multiple decision trees and combine their predictions to improve the overall performance of the model.

Decision trees can be implemented in various programming languages such as Python, R, and Matlab. Popular implementations include scikit-learn in Python and the rpart and caret packages in R.

In summary: decision trees are a powerful algorithm for both classification and regression tasks. They are easy to interpret and understand and can handle both categorical and numerical features, but they can be prone to overfitting. A random forest is an ensemble of decision trees that improves the overall performance of the model.
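
A minimal scikit-learn sketch (built-in iris data; the depth limit is an illustrative choice):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
# max_depth limits how deep the tree can grow, a simple guard against overfitting
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))  # prints the learned decision rules as text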

Random forests: an ensemble of decision trees

Random Forest is an ensemble machine learning algorithm used for both classification and regression tasks. It is a variation of decision trees, where multiple decision trees are trained and combined to make predictions. The idea behind random forests is to reduce the variance and increase the accuracy of the model by averaging the predictions of multiple decision trees.

A random forest algorithm generates multiple decision trees by training them on different subsets of the data. This is done by randomly selecting a subset of the features and a subset of the data points to use for each tree. The final prediction is made by averaging the predictions of all the trees in the forest.

One of the main advantages of random forests is that they are less prone to overfitting than single decision trees. This is because each tree in the forest is trained on a different subset of the data, which reduces the correlation between the trees. Additionally, random forests can handle both categorical and numerical features and are able to capture non-linear interactions between the features.

Random forest can be implemented in various programming languages such as Python, R, and Matlab. Popular implementations include scikit-learn in Python and the randomForest and caret packages in R.

In summary: random forest is a powerful ensemble machine learning algorithm used for both classification and regression tasks. It is less prone to overfitting than a single decision tree, can handle both categorical and numerical features, and is able to capture non-linear interactions between the features. It combines the predictions of multiple decision trees to improve the overall performance of the model.
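
A quick illustrative sketch in scikit-learn (iris data; 100 trees is an arbitrary example setting):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# 100 trees, each fitted on a bootstrap sample with random feature subsets
forest = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(forest, X, y, cv=5).mean())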

k-nearest neighbors: used for classification and regression tasks

k-nearest neighbors (k-NN) is a supervised machine learning algorithm used for both classification and regression tasks. It is a non-parametric method, which means that it does not make any assumptions about the underlying distribution of the data.

The idea behind k-NN is to classify a new point based on its similarity to other points in the data. The algorithm works by finding the k-nearest data points to the new point, and then the majority class or the average value of the k-nearest points is used to make the prediction.

One of the main advantages of k-NN is its simplicity and interpretability. It requires very little training data and can handle both categorical and numerical features. However, it can be sensitive to the choice of k and to the scale and distribution of the data. To overcome these issues, techniques such as normalization and feature scaling are often used.

The k-NN algorithm can be implemented in various programming languages such as Python, R, and Matlab. Popular implementations include scikit-learn in Python and the class and caret packages in R.

In summary: k-nearest neighbors (k-NN) is a simple and interpretable algorithm used for both classification and regression tasks. It classifies a new point based on its similarity to other points in the data. It has the advantage of requiring very little training data and can handle both categorical and numerical features. However, it can be sensitive to the choice of k and to the scale and distribution of the data.
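
A small scikit-learn sketch (iris data; the scaling step reflects the advice above, and k=5 is just an example):

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
# scale the features first, since k-NN is sensitive to feature scale
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(knn, X, y, cv=5).mean())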

Support vector machines: used for classification tasks

Support Vector Machines (SVMs) are a supervised machine learning algorithm used for classification tasks. They are powerful and versatile, able to handle both linear and non-linear data.

The goal of an SVM is to find the best boundary (also called a hyperplane) that separates the data points into different classes. The boundary that maximizes the margin, which is the distance between the boundary and the closest data points from each class, is chosen as the best boundary. These closest data points from each class are called support vectors.

SVMs can handle both linear and non-linear data by using a technique called the kernel trick. The kernel trick transforms the input data into a higher-dimensional space where the data becomes linearly separable. In this new space, the algorithm finds the best boundary, and then it is transformed back to the original space.

One of the main advantages of SVMs is that they can handle high-dimensional data and have a high accuracy. However, they can be sensitive to the choice of the kernel and the parameters of the model. Additionally, SVMs can be less efficient with large datasets.

SVMs can be implemented in various programming languages such as Python, R, and Matlab. Popular implementations include scikit-learn in Python, the e1071 package in R, and MATLAB’s fitcsvm.

In summary: Support Vector Machines (SVMs) are a powerful and versatile algorithm for classification tasks. The algorithm finds the boundary that separates the data points into different classes while maximizing the margin, and it can handle both linear and non-linear data using the kernel trick. SVMs can achieve high accuracy, but they can be sensitive to the choice of kernel and model parameters, and they can be less efficient with large datasets.
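
An illustrative scikit-learn sketch (iris data; the RBF kernel and C=1.0 are arbitrary example choices):

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# the RBF kernel handles non-linear boundaries via the kernel trick
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(svm, X, y, cv=5).mean())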

Neural networks: used for a wide range of tasks

Neural Networks (NNs) are a type of machine learning algorithm inspired by the structure and function of the human brain. They are a set of algorithms that are designed to recognize patterns in data, by learning from examples.

A neural network is made up of layers of interconnected nodes, also known as artificial neurons. Each neuron receives inputs, performs a computation on them, and then produces an output. The layers of neurons are connected to each other, and the output of one layer becomes the input for the next layer. The last layer produces the final output of the network.

The most common type of neural network is the feedforward neural network, also known as the multi-layer perceptron (MLP). In this type of network, the information flows only in one direction, from the input layer to the output layer.

There are other types of neural networks, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), which are suited for specific tasks such as natural language processing and image recognition.

One of the main advantages of neural networks is their ability to learn complex, non-linear relationships in the data. They can be trained to perform a wide range of tasks, from simple linear regression to complex image recognition. However, neural networks can be difficult to train and can require a large amount of data and computational resources. Additionally, the process of understanding and interpreting the internal workings of a neural network can be challenging.

Neural networks can be implemented in various programming languages such as Python, R, and Matlab. The most popular libraries for implementing neural networks are TensorFlow, Keras, and PyTorch.

In summary: Neural Networks (NNs) are a type of machine learning algorithm inspired by the structure and function of the human brain. They are designed to recognize patterns in data, by learning from examples. They can be trained to perform a wide range of tasks, from simple linear regression to complex image recognition. They can be difficult to train and can require a large amount of data and computational resources, but they can learn complex, non-linear relationships in the data.
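
As a minimal sketch using scikit-learn’s built-in multi-layer perceptron (a toy digits example; larger networks would typically be built with TensorFlow, Keras, or PyTorch as mentioned above):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# a single hidden layer of 64 neurons, an illustrative choice
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))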

Conclusion

Finally, it’s important to understand the role of feature engineering in machine learning. This is the process of transforming raw data into useful inputs for a model. Feature engineering can greatly improve the performance of a model, so it’s an important step in any machine learning project.

Machine learning is a vast field with a lot to learn, but by understanding the basics and familiarizing yourself with the most common techniques, you can start to build your own models and begin solving problems with machine learning.

How to execute R script in Power BI? A comprehensive guide

In this article, I am going to discuss how we can use the analytical and visualization power of the R programming language within Power BI. We can execute R script in Power BI to create data models, prepare reports, clean data, perform advanced data shaping and analytics, impute missing data, and carry out clustering, forecasting, and many other advanced tasks.

R is arguably the language most preferred by data scientists. It is an open-source language backed by a vast community of developers and users, with very rich libraries for performing almost all kinds of complex analysis.

Microsoft Power BI provides a nifty feature for integrating the power of the R language. We can run R scripts within Power BI and thus perform even more complex analysis.

Installing R

To use R scripts in Power BI, R needs to be installed on the same computer on which you are using Power BI. You can install R from the CRAN distribution of the R Project, or go for the Microsoft R Open distribution (MRAN).

It is also a good idea to install a suitable R IDE, such as RStudio; the free version serves our purpose. The IDE helps us check the R code for errors, because correcting errors in R code within Power BI can be difficult.

If you have already installed R and RStudio, you may have more than one version of R on your computer. Check that the correct version of R is selected in File -> Options and settings -> R scripting.

Checking the R version installed in your computer

Importing data using R script

We can import data using an R script too. The “Get data” option in Power BI helps us import data. Here is an example of importing data I have saved in CSV format on my computer.

See the below image, consisting of screenshots of all the steps from my computer. The “Other” option under “Get data” lets you supply an R script to import the data.

See the 4th step in the below image, where a window opens and I enter the following R script:

dataset <- read.csv(file="E:/test.csv", header=TRUE, sep=",")

The code is very simple: it imports a .csv file from my computer’s E:/ drive, keeping its original header and using “,” as the separator.

Steps to import data using R script

As the given CSV file contains only one dataset, the next window showed the dataset under the variable name “dataset”, as specified in the script.

The R script should return at least one data frame. Power BI creates a table from each data frame. If a data frame has columns containing complex or vector values, Power BI will display an error for them. Any field with “N/A” will be displayed as “NULL”.

If the R script takes more than 30 minutes to execute, it will time out. If there is any interactive element in the R script, such as user input, it may halt the script’s execution. If your R script refers to any file location, provide the full path instead of a relative path.

Execute R script in Power BI

Here I will demonstrate executing a simple R script. My purpose is just to show the process of running an R script inside Power BI, so the script will be very simple.

The data consists of two variables, X and Y, where Y depends on X. I will use the “neuralnet” library of R to create a prediction of Y using X.

I wrote the code in an external R IDE (RStudio) and executed it there before using it in Power BI, to confirm that it is free from errors.
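
For reference, a minimal sketch of what such a script might look like (the column names Y and X and the hidden-layer size are assumptions for illustration):

library(neuralnet)
# 'dataset' is the data frame Power BI passes into the R visual
model <- neuralnet(Y ~ X, data = dataset, hidden = 2)
# plot the fitted network architecture in the visual
plot(model)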

In the Visualizations pane of Power BI, you can see the icon for the R script visual. We need to click this icon and provide the variables we are going to use in the R script.

Writing R script

Until the R script is executed, the report view shows a blank R script visualization window. Copy the code from the R IDE (RStudio), paste it into the R script editor, and run the code.

Error handling

If there is any error in our R code, Power BI will throw an error. For example, here I intentionally gave a wrong dataset name, and Power BI has clearly flagged that error in its report view. See the below image.

Error handled by Power BI

The error says that the object ‘a’ was not found. This is because the script refers to an object ‘a’, while the dataset variable here is actually named ‘dataset’.

Once I corrected the mistake, the script executed, and here is the output: the neuralnet model is fitted and the corresponding network architecture is plotted.

Executing the R script within Power BI

So here is a quick overview of how we can use the power of the R programming language within Power BI desktop. R is among the most preferred data science languages and is backed by a vast community of data scientists and analysts.

I have tried to provide a comprehensive guide on how to execute R script in Power BI, with all relevant screenshots taken while doing it myself on my computer. I hope you find it helpful when doing the same for the first time.

So, try the steps on your own as described here. Please comment below if you have any queries or suggestions.

What are logical functions in Power BI, and how to use them?

The logical functions in Power BI are essential while writing DAX expressions. Logical functions help us in decision making to check if any condition is true or false.

Once the data has been extracted through Power Query, these DAX expressions help us to fetch important information from the data. Here is an article explaining the difference between Power Query and DAX, which you may be interested in.

The logical functions in Power BI I will discuss here are IF, AND, OR, NOT, IN and IFERROR. They are all true to their names and do the task exactly as they are used in English.

I will discuss them along with their application on a data set containing the area and production of different crops of different Indian states. Below is a glimpse of the dataset.

Dataset with crop production

I have collected the data from the web with the data scraping feature of Power BI. Here is the article where I have explained how you can take advantage of this nifty feature of loading data from the web in Power BI.

“IF” logical function

The IF function accepts three arguments. Its expression is shown below. We can see that it carries the same conditional meaning as in English and is very easy to understand.

IF (expression, True_Info, False_Info)

The first argument of this function is a Boolean expression. If this expression evaluates to TRUE, the IF function returns the second argument; otherwise, it returns the third argument.

Let’s see a practical example of its use on the India_statewise_crop_production dataset. I have created a new column, Production_category, using the IF function: if the production is less than 10, it falls under the LOW production category; otherwise, HIGH.
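
The calculated-column formula looks along these lines (a sketch; the column name is assumed from the example dataset):

// sketch: column name assumed from the example dataset
Production_category = IF([Production] < 10, "LOW", "HIGH")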

Creating new column using IF function

“Nested IF” function

We can use IF within another IF function, which is called a nested IF. It helps us check more than one condition at a time.

For example, I have placed two conditions here: the one I used earlier in the IF function, plus another one stating that if Production is greater than 500, the production category is HIGH, else MEDIUM.
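
As a sketch, the nested formula would look something like this (column name assumed as before):

// sketch: nested IF with an assumed column name
Production_category = IF([Production] < 10, "LOW", IF([Production] > 500, "HIGH", "MEDIUM"))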

See the result below and how the Production_category column takes the new values according to the nested IF condition.

Nested IF function

“AND” logical function

The AND function takes two arguments. If both arguments evaluate to TRUE, it returns TRUE, else FALSE. Its syntax is as below:

AND (Logical_condition1, Logical_condition2)

I have applied the AND function to find out whether productivity is high or low, using it to check the conditions Area less than 10 and Production higher than 200. If both conditions are TRUE, it returns “High Productivity”, else “Low Productivity”.
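
A sketch of the formula (column names assumed from the dataset):

// sketch: column names assumed
Productivity = IF(AND([Area] < 10, [Production] > 200), "High Productivity", "Low Productivity")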

Use of AND function

“OR” logical function

Unlike the AND function, the OR function returns TRUE if either condition holds true. It returns FALSE only if both conditions are FALSE.

For the crop production data set, I have applied the OR function to check the conditions Area<10 and Production<20: if either of them is true, it returns “Low Production”, else “High Production”.
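
The formula would be along these lines (again a sketch with assumed column names):

// sketch: column names assumed
Production_level = IF(OR([Area] < 10, [Production] < 20), "Low Production", "High Production")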

Use of OR function

“NOT” function

The NOT logical function simply changes FALSE to TRUE and TRUE to FALSE. It is very simple to use. See the below example.

I have used NOT with the IF function. The IF checks the condition Season = “Kharif”; if it is true, IF returns TRUE, and the NOT function then turns it into FALSE. See the output column “Kharif_check”: it has False for Kharif entries and True for the others.
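
A sketch of such a column (the Season column name is assumed):

// sketch: Season column name assumed
Kharif_check = NOT(IF([Season] = "Kharif", TRUE(), FALSE()))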

Use of NOT function

“IN” logical function

The IN function lets us restrict a calculation to specific entries in a column and compute corresponding values from other columns.

In this example, I wanted to calculate the total production for only three states: “Assam”, “Bihar”, and “Uttar Pradesh”. To do that, I created a measure using the SUM and IN functions nested under the CALCULATE function, and displayed the result on a card.
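
The measure might look roughly like this (a sketch; table and column names are assumed from the examples in this article):

// sketch: table and column names assumed
Total_Production = CALCULATE(SUM('India_statewise_crop_production'[Production]), 'India_statewise_crop_production'[State_Name] IN {"Assam", "Bihar", "Uttar Pradesh"})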

Use of IN function

“IFERROR” logical function

IFERROR is another very useful logical function, which checks for any error and returns values accordingly. It is very useful when checking for arithmetic overflow or other kinds of errors.

The syntax for this function is as below:

IFERROR (Value, ValueIfError )

You get a syntax guide when you select the function in the Power BI editor; see the below image. As soon as I started to type the function name, Power BI IntelliSense guided me with autocomplete and the syntax for the function.

Use of IFERROR function

In my example, I have checked whether there is an error in the Crop column; if any error is found, it should return “Error”. As there was no such error in the column, the IFERROR column has exactly the same values as the Crop column.
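
A sketch of the column formula used here (column name assumed):

// sketch: Crop column name assumed
Crop_check = IFERROR([Crop], "Error")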

How to use the “COUNT” function in Power BI?

"COUNT" function in Power BI

COUNT() is an important function for writing DAX formulas in Power BI. It is one of the aggregation functions of DAX, which means it summarizes the values in a column, in this case by counting them, so that the result can be used in analytics.

We apply DAX to slice and dice the data to extract valuable information. To import data from different data sources and perform the required transformations, we need to know how to use Power Query. If you are curious about the difference between Power Query and DAX, here is an article you may be interested in.

Use of COUNT() in Power BI

The syntax for the COUNT function is very simple; we pass only the column name as the argument, like below:

Measure = COUNT (Table_name [Column_name])

When applied to a column, the COUNT function returns the number of cells containing numbers, skipping blank cells; the result is a whole number. If the column contains no countable values at all, the function returns blank.

Here is an example of the application of COUNT() on a data set I have on the rainfall of different Indian states. The dataset has three columns: “SUBDIVISION”, containing different ecological zones of the country; “YEAR”, running from 1901 to 2019; and “ANNUAL”, containing the rainfall in mm for the corresponding year.

Application of COUNT() in Power BI

The data I have collected from the web using the data scraping feature of Power BI desktop. Here is a glimpse of the dataset.

Glimpse of the rainfall data

First, I created a new measure using DAX (see here how you can create a new measure in Power BI). A measure has the default name “Measure”, which I have changed to “Measure_count“.
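
The measure itself is a one-liner along these lines (a sketch; the table name is assumed from the rainfall examples later in this guide):

// sketch: table name assumed
Measure_count = COUNT(rainfall_india[ANNUAL])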

Using COUNT() in a measure

Here you can see COUNT() used to count the ANNUAL column cells containing numbers. To see the result of COUNT(), I have used a “Card” visual. The number “4090” on the card is the count of cells in the ANNUAL column that contain a number.

If we change the column and replace ANNUAL with SUBDIVISION, the count function returns “4116”. This is because the annual rainfall value is not present for every subdivision. By checking the difference, we can tell how many subdivision and year combinations have no rainfall data.

The COUNTA() function

If a column consists of logical values like True and False, COUNT() fails to count them. To count such values, COUNT() has another version, COUNTA(). COUNTA() counts non-blank values of any type, including logical values and text.

This data set doesn’t have any logical values, so if the COUNTA() function is applied to the same columns, i.e. ANNUAL and SUBDIVISION, the results are the same as COUNT() gave.

The COUNTAX() function

When values need to be evaluated through an expression rather than counted directly from a single column, there is another useful variation of COUNT(), which is COUNTAX(). It evaluates an expression for each row of a table and returns the count of rows where the result is non-blank.

The DAX formula for COUNTAX() is:

COUNTAX ( <table>, <expression>)

It also returns a whole number, but unlike the COUNTA() function, it iterates through the rows of the table, evaluates the expression for each row, and returns the count of non-blank results.

Here is an example of the application of COUNTAX() on the same table. I have used this function to count the ANNUAL values recorded for a particular YEAR in the rainfall table, with the FILTER() function nested under COUNTAX() to pick the rows corresponding to YEAR = 1910 and 2010.
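
A sketch of one such measure (table and column names assumed, as before):

// sketch: table and column names assumed
Count_1910 = COUNTAX(FILTER(rainfall_india, rainfall_india[YEAR] = 1910), rainfall_india[ANNUAL])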

Application of COUNTAX()

From the above figure, we can see that the COUNTAX() function has returned two different whole numbers for the two years 1910 and 2010. This is because not all the SUBDIVISION entries have a record of annual rainfall for the year 1910.

An overview of DAX in Power BI

As the name suggests, Data Analysis eXpressions, or DAX, in Power BI is a collection of operators, functions, and constants that we use in writing formulas or expressions to return a value or values. It is a native language for Microsoft’s data analytics tools. DAX is also a highly versatile, functional language with the capacity to work with a relational database.

DAX helps us dig into the data we already have in hand to explore new information. It helps us perform dynamic aggregations and slice and dice the data. It is different from Power Query, which has the M language at its core: Power Query performs the data extraction from different sources, whereas DAX is applied to the extracted data for analysis purposes.

It is very common to confuse DAX with Power Query. You can refer to this article for a detailed comparison between Power Query and DAX.

Excel formulas are similar to DAX formulas, so anyone with experience in writing Excel formulas finds it easy to write DAX. However, DAX is far more advanced than Excel worksheet formulas.

DAX is mainly used to create “Measures” and “Calculated Columns”. Below is an example of creating a measure using DAX.

Example of DAX formula

Writing an effective DAX formula is the key: an effective DAX formula helps us get the most out of the data. Writing DAX formulas in Power BI is easy, as the Power BI DAX editor has an autocomplete feature that automatically prompts us with probable options.

Now let’s try writing a DAX formula to perform a simple calculation. I already have a data set in Power BI desktop on the rainfall of different Indian subdivisions. The data was scraped from the web using the data scraping tool of Power BI; you can get the details of how to do it in this article.

Below is an example of how a DAX measure is created on the Power BI desktop. The screenshots from my Power BI desktop show the steps for creating a measure. The purpose of the measure is to calculate the total annual rainfall.

First of all, to create a new measure, right-click on the “Fields” pane of the Power BI desktop report/data window and then choose “New measure“.

Creating new measure

The default name of the measure is “Measure“; I have changed it to “Rainfall“. As you start writing the function name, Power BI suggests relevant function names. Here I have selected “CALCULATE“, a very popular and frequently used DAX function.

Steps for creating a measure using DAX

As we enter the “CALCULATE” function, it prompts us to show that it accepts an expression followed by filters. I have selected the “SUM” function with the “ANNUAL” column of the “rainfall_india” table inside it, as we want to calculate the total annual rainfall.

With this, the measure has been created. We can check the “Rainfall” measure in the “Fields” pane under the “rainfall_india” table.
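
For reference, the finished measure reads roughly as follows:

// total annual rainfall, as built in the steps above
Rainfall = CALCULATE(SUM(rainfall_india[ANNUAL]))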

Nested function in DAX

Inside the “CALCULATE” function I have again used the “SUM” function. This is an example of a nested function, i.e. a function within another function. Nested functions help us narrow down the query to achieve the desired result.

DAX can have up to 64 levels of nested functions, although using that many is very uncommon, as debugging such complex formulas is tough and their execution time is high.

Using a measure in another measure

Another useful feature of DAX is that it allows using an already created measure within another measure. For example, if we want to further narrow down the result to calculate the total annual rainfall of a particular subdivision, we can use the “Rainfall” measure we already created. Let’s see how to do it.

For example, we want to know the total annual rainfall of the state “Kerala“. The measure “Rainfall” calculates the total annual rainfall, so we need to provide a filter within the CALCULATE function along with the “Rainfall” measure.
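
A sketch of such a measure (assuming the subdivision is labelled “Kerala” in the SUBDIVISION column):

// sketch: subdivision label assumed
Kerala_Rainfall = CALCULATE([Rainfall], rainfall_india[SUBDIVISION] = "Kerala")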

Using a measure within a measure

See the above image, where I have nested one measure within another. A table and a bar chart have also been created to compare the total annual rainfall with the Kerala rainfall, just to show how the measures perform.

Row context and filter context of DAX

These two concepts of context are very important for the effective use of DAX. Context is what makes a DAX formula evaluate dynamically, so that its result changes with the current row or with the filters applied.

Row context applies when a formula is evaluated for a single row of a table, as in a calculated column. In most cases, we don’t even realize that we are applying the concept of row context.

Filter context is a more complex concept than row context. It is applied to narrow down the data. For example, here you can see how the column “SUBDIVISION” of “rainfall_india” has filtered the context and helped us get the annual rainfall of a particular subdivision.

An overview of Power Query in Power BI

Power Query in Power BI plays the role of a data connection technology. It does the data mashup, i.e. connects, combines, and refines data from many sources to meet the needs of our data analysis.

Power Query is available in Excel 2016 and later versions, and it can be added to Excel 2010 as an add-in. It is mainly used for data Extract, Transform, and Load (ETL) into an Excel worksheet or a Power BI model.

ETL is something that takes up the major portion of a data analyst’s time. To ease this task, Power Query takes raw data from the source and converts it into a more workable form that is easier to analyze and draw insights from.

Data sources for Power Query

Power Query in Power BI and Excel allows us to extract data from almost any external source, as well as from Excel itself. Here are some examples of the external sources we can bring data from, and there are many more.

Some examples of external sources Power Query in Power BI can bring data from

After the data has been extracted from the desired source, Power Query helps us clean and prepare the data.

Using Power Query, we can easily append or stack different data tables. We can create relationships by merging different data tables, and group and summarize data using the Pivot feature provided by Power Query.

The beauty of Power Query in Power BI lies in the fact that none of this data transformation affects the original data set. The transformations happen in Power BI’s memory, and we can get our original data back at any time simply by removing a particular transformation step.

Applied Steps can be managed from Query Settings

Once we have summarized the data extracted from diverse sources, the report can be refreshed with one click. Every time new data is added to the source folder, Power Query helps us update the report accordingly with this refresh feature.

Flow of data processing by Power Query in Power BI

The M language and structure of Power Query

The M language is at the core of Power Query. It is similar to the F# language, is case sensitive, and is organized into code blocks starting with "let" and "in", as shown below.

let
     variable = expression [, ...]
in
     variable

These blocks consist of procedural steps declaring and defining variables. Power Query is very flexible about the physical position of these logical steps: we can declare a variable at the beginning of the code and define it at the end.

But code whose logical and physical structures differ is very tough to debug. So, unless absolutely necessary, we should keep the logical and physical structure of Power Query the same.

Editing the Power Query

Luckily, we don’t need to write the Power Query in Power BI from scratch. It is already written in the background as we perform the data transformation steps. If needed, we can tweak the Power Query code to make the desired changes.

First of all, we need to open the data transformation window by clicking the “Transform data” option in Power BI. Then the Power Query code can be edited using either the “Advanced Editor” or by editing the code for each of the “Applied Steps” under “Query Settings“.

Editing the Power Query in Power BI

The image below shows an example of Power Query where the data is stored in a variable called “source“. Some other variables are also declared here to store the data after different transformation steps.

The programming blocks of M language

The variables can be of any supported type and must have a unique name. If a variable name contains spaces, it must start with a hash sign (#) and be enclosed in quotes; this is the protocol for declaring Power Query variables.
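
For example, a minimal (purely illustrative) query with a space in a variable name:

let
    // variable names with spaces need the #"..." form
    #"Total Rainfall" = 100 + 200
in
    #"Total Rainfall"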

How to do forecasting in Power BI desktop?

Forecasting means predicting the future with the help of present and past data. Power BI’s forecasting uses the concept of exponential smoothing to predict future values. The Power BI desktop has a very nifty forecasting feature, and this article will describe the process with practical data.

The data has been collected from Wikipedia using the data scraping feature provided in Power BI. I have described here how you can load the data from the web with this feature.

The data I have collected has several years of information on the monthly and annual rainfall of different regions of India. This data can be used to predict the rainfall of those particular regions for the coming years.

The data

I have selected only a single region, the subdivision covering West Bengal and Sikkim, with rainfall data from 1901 to 2016, recorded in mm. On the basis of these many years of data, let’s try to predict the rainfall of the next 5 years.

Here is a glimpse of the data.

Creating the line chart

In order to apply the forecasting feature in the Power BI desktop, we first need to create a line chart. The line chart option is available in the “Visualizations” pane of the application.

Select the “Line chart” option from visualizations and then select appropriate variables from “Fields“.

The line chart option and variable selection

The next step is selecting “Year” as the Axis variable and “Annual” rainfall in the Values. Consequently, the line chart will be created.

Creating the Line chart

Forecasting in the Power BI desktop

Now that the line chart is ready, we need to create a forecast for future time points. In the “Visualizations” pane, under “Analytics”, you will find the “Forecast” option. However, unless your data has at least 60 time points, the option will not be available.

Forecast tool in Power BI

Go with the default values of Forecast and click Apply. A forecast for 10 future time points is produced; as in my case each year is an individual time point, the forecast covers the next 10 years.

The confidence interval is 95% by default. In layman’s terms, if the experiment were conducted 100 times, 95 times the true value would lie within the interval shown around the forecast values.

Producing forecast with default options

But you can see that the forecast produced does not appear very realistic; it bears no similarity to the historical trend. So, something is wrong here. The seasonality was left to be detected automatically, which is not working in the present case.

Seasonality in Forecasting

We need to provide an appropriate value for the “Seasonality” parameter, the most important one in forecasting. So let’s try to adjust this value to get the most accurate result.

Seasonality in time series forecasting refers to a time period during which the data shows some regular and predictable changes. This period may be weeks, months or years with a cyclic pattern.

Identifying the “Seasonality” in forecasting

We can identify this cycle from the line chart we created. If we closely analyze the line chart and zoom in a little, we can notice that the line repeats a pattern roughly every 5-6 years.

So, I will try to create the forecast with seasonality values close to 5 time points.

Checking accuracy of the forecast

To check the performance of the forecast, the forecast tool of the Power BI desktop has an “Ignore last” feature. It simply helps us produce the forecast leaving out the last few points, as specified in this field.

This means that for these time periods we have both the observed and the forecast values, so we can compare how precise the forecast is.

If we take a seasonality of 4 or 6 time points, the forecast differs considerably from the observed values. See the below images: for example, for the year 2011, the actual rainfall is 2418.70mm while the forecast is 2733.56mm.

Forecast with seasonality 4

If we set the seasonality to 6, the forecast is again very different from the original value.

Forecast with seasonality 6

But if we provide a seasonality of 5, we achieve the best forecast, with values closest to the original rainfall. Taking the example of the year 2011 again, with a seasonality of 5 time points the rain forecast is 2337.89mm.

The “Format” option allows us to change the style of the forecast report generated. We can change the confidence interval pattern, line pattern and colour etc.

How to use Goal Seek and Solver in Excel 2016?

Goal Seek and Solver in Microsoft Excel 2016 are two very important functions that help us perform back-calculations. Of the two, Goal Seek is the simpler one, so let’s start with it.

I will demonstrate the use of Goal Seek with a very practical example. Every one of us wants to know the future value of an investment, and also how much to invest to reach a goal.

There are lots of online tools available to calculate this. Here we will use the formula for compound interest: we know that our investment, whether in bank deposits or the market, earns compound interest.

Compound interest means the annual interest gets accumulated with the principal amount, and the next time the interest is calculated on this increased amount.

For example, if we invest Rs 100.00 and get 10% interest in the first year, the principal amount becomes Rs 110.00 the next year. So the interest in the next year is 10% of Rs 110.00, i.e. Rs 11.00; it also gets added to the principal (Rs 110.00 + Rs 11.00 = Rs 121.00), and the process goes on.

See the below screenshot from my Excel spreadsheet. It contains the formula for calculating the return from compound interest. Let’s take an example where we assume an annual interest rate of 7% and want to know the future value of Rs 10000.00 invested for a period of 10 years.
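
For reference, the underlying relation is the standard compound-interest formula; plugging in the values above:

FV = PV × (1 + r)^n
FV = 10000 × (1 + 0.07)^10 ≈ Rs 19671.51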

Use of Goal Seek in Excel 2016

Calculating return on investment

Now, what if we want to know how much we need to invest in order to get a return of Rs 25000.00, keeping the other conditions the same? Here we need to back-calculate the investment amount using Goal Seek.

You can find the Goal Seek option in “What-If Analysis” under the Data tab of Microsoft Excel. It has three fields that we need to fill; see the below figure to understand the process.

We need the Future Value of cell B5 to be 25000. So the “Set Cell” is B5, “To Value” is 25000 and we want to change the value of cell B2.

Using Goal Seek in Excel

Click “OK” to get the investment amount. Now we know that we need to invest Rs 12708.73 (the cell was not set to 2 decimal places) for it to grow to Rs 25000.00 in 10 years at an interest rate of 7%.

Result of Goal Seek

Again, if we want to know how many years it takes to grow the same Rs 10000.00 to Rs 25000.00 at 7% interest, we use Goal Seek and change cell B4. See the below image: now we know that we have to keep the amount invested for 14 years.

Another example of Goal Seek

But the problem with Goal Seek is that it cannot change more than one variable; it is for simple calculations. If we have a more complex situation that needs several variables to be changed at the same time, we need to use Solver.

So, let’s see how Solver works with an example.

Use of Solver in Excel 2016

“Solver” is not a default feature in Excel and comes as an add-in. You have to add it to make the option appear under the “Data” tab. Follow the steps shown in the screenshots below to add this add-in.

Go to “File” and then click “Options” to open Excel Options page.

Opening Excel options

In Excel Options, go to Add-ins and click “Go...” to open the window containing the list of available add-ins. Now select “Solver Add-in” and click OK.

Activating Solver Add-in

Now check if the Solver Add-in has been added under the Data tab.

Solver Add-in under Data tab

Application of Solver Add-in

Now, to see the application of Solver, let’s take another simple yet practical example. Below I have shown a small stock portfolio created in Excel, where I have calculated the total invested amount for some stocks with hypothetical prices. As shown, the invested amount is calculated by multiplying each stock’s cost by its quantity.

The total amount stands at Rs 202660.00, but I want to invest only Rs 40000.00. So my goal is to calculate the quantities of some stocks, each lying within a specified range. The constraints for this calculation are also mentioned in the below image.

The example data set and constraints

Like Goal Seek, Solver also needs the “Objective cell” and the “Variable cells” whose values we want to change. See the screenshot below, where I have shown how to specify the cells as per our requirement. The value field has 40000, as we want to invest Rs 40000.00 in total.

Application of Solver

In the “Add Constraint” window, you need to provide each cell reference and its specific “Constraint“. They are provided one by one and are added to the “Subject to the Constraints” list. See the image below to understand the process.

Adding constraints in Solver

Now click “Solve” and then if Solver is able to find a solution for your problem, the next window appears where you need to confirm the change by clicking OK.

Now you have the number of stocks and their costs which you can buy within your budget of Rs 40000.00.

Changed values with Solver

I hope the article will help you understand how to use both Goal Seek and Solver in Excel 2016. Please comment below if you have any questions or doubts regarding the article.

How to create a new column from an existing column in Power BI

Creating a new column from an existing column in Power BI is very useful when we want to perform a particular analysis. Many times, the new column requires clipping a string part out of an existing column.

I faced such a situation a few days back and was looking for a solution, and this is when I again realized the power of “Power Query“. This article shares that trick with you.

If you are new to Power BI, I would suggest going through this article for a quick idea of its free version, Power BI desktop. It has numerous features for data manipulation, transformation, visualization, report preparation, etc.

This article covers another super useful feature of Power BI. Adding a new column derived from existing columns is very common in data analytics; it may be required as an intermediate step of data analysis or for fetching some specific information.

For example, we may need only the month or day from the date column, or only the user-id from the email-id list etc.

In this article I will demonstrate the process using the data sets related to India’s state-wise crop production and rainfall data.

Let’s start the process step by step and see how I have done this.

Use of “Add Column” and “Transform” options

Power BI desktop offers two options for extracting information from an existing column. The two options, namely “Add Column” and “Transform”, have different purposes altogether.

Create new column from existing column Power BI

The Add Column option adds a new column containing the desired information extracted from the selected column, whereas the Transform option replaces the existing values with the extracted text.

Here our purpose is to create a new column from an existing column in Power BI. So let’s explore the “Add Column” feature and the options it offers.

Create new column from existing column Power BI with “Add column” option

First of all, you need to open the “Power Query Editor” by clicking “Transform data” in Power BI desktop. Here you will get the “Extract” option under the “Add Column” tab, as shown in the images below.

Extracting the “Length”

This option fetches the length of each string in the selected column and populates the new column with it. In the example below, I have created a new column with the lengths of the state names from the “State_Name” column.

Extracting length

The Power Query (M) expression associated with this feature is given below. The M language is very easy to understand, and you can make any necessary changes right here if you want.

= Table.AddColumn(#"Changed Type", "Length", each Text.Length([State_Name]), Int64.Type)

Extracting the “First Characters”

If we select the “First Characters” option, then, as the name suggests, it extracts as many characters as we want from the start of the string. As shown in the image below, upon clicking the option, a new window appears asking for the number of characters you want to keep from the beginning.

As a result, the new column named “First Characters” contains the first few characters of our choice.

Extracting first characters
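Power Query writes a similar M step for this option, using Text.Start. Below is a minimal sketch, where the character count of 5 and the name of the previous step are assumptions for illustration:

= Table.AddColumn(#"Inserted Length", "First Characters", each Text.Start([State_Name], 5), type text)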

Extracting “Last Characters”

In the same way, we can extract the last characters we need. See the image below: when we select the “Last Characters” option, it again prompts for the number of characters we wish to fetch from the end of the selected column.

Last characters

As we provided 7 in the window asking for the number of characters, it has extracted the last seven characters and populated the “Last Characters” column.
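The corresponding M step uses Text.End; a minimal sketch, with the previous step name assumed:

= Table.AddColumn(#"Inserted First Characters", "Last Characters", each Text.End([State_Name], 7), type text)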

Extract “Text Range”

This option lets you select a substring from the exact location you want: you specify a starting index and the number of characters to take from that index. See the example below, where I wished to extract only the string “Nicobar”.

Keeping this in mind, I provided 12 as the starting index and 7 as the number of characters. As a result, the “Text Range” column has been populated with the string “Nicobar”, as we wanted.

Extracting text range
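Under the hood this option calls Text.Middle, whose starting index is zero-based, which is why 12 picks up the 13th character. A minimal sketch of the generated step, with the previous step name assumed:

= Table.AddColumn(#"Inserted Last Characters", "Text Range", each Text.Middle([State_Name], 12, 7), type text)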

Extracting using delimiters

Another very useful feature is using delimiters to extract the necessary information. There are three options for using delimiters:

  • Text Before Delimiter
  • Text After Delimiter
  • Text Between Delimiters

The image below demonstrates the use of the first two options. As the “State_Name” column has only one delimiter, i.e. the blank space between words, I have used that delimiter in both cases.

You can clearly observe differences between the outputs here.

Use of text before delimiters

The script for executing this operation is given below.

= Table.AddColumn(#"Removed Columns", "Text After Delimiter", each Text.AfterDelimiter([State_Name], " "), type text)

Below is an example where we need to extract the text between the delimiters. The process is the same as before.

Text between delimiters

The code is as below.

= Table.AddColumn(#"Inserted Text After Delimiter", "Text Between Delimiters", each Text.BetweenDelimiters([State_Name], " ", " "), type text)

Another example with different delimiters

Below is another example, where you have some delimiter other than the blank space. For example, one of the columns in the data table has a range of years, with the years separated by a “-”.

Now if we use this delimiter in both the Text Before Delimiter and Text After Delimiter options, the results are as in the image below.

Use of text before delimiters
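The generated M steps are the same as before, just with “-” as the delimiter. A minimal sketch, where the column name [Year] and the previous step names are assumptions:

= Table.AddColumn(#"Changed Type", "Text Before Delimiter", each Text.BeforeDelimiter([Year], "-"), type text)
= Table.AddColumn(#"Inserted Text Before Delimiter", "Text After Delimiter", each Text.AfterDelimiter([Year], "-"), type text)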

Use of “Conditional Column”

This is another powerful feature of Power BI desktop, where we create a new column by applying some conditions to another column.

For example, in the case of the agricultural data, the crop cover of different states has different percentages, and I wish to create six classes using these percentage values.

First, open the Power Query Editor and click the “Conditional Column” option under the “Add Column” tab. You will see a window as in the image below.

The classes will be as below:

  • Class I: crop cover with <1%
  • Class II: crop cover 1-2%
  • Class III: crop cover 2-3%
  • Class IV: crop cover 3-4%
  • Class V: crop cover 4-5% and
  • Class VI: crop cover >5%
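Behind the dialog, Power Query builds a nested if expression in M. A minimal sketch of the generated step, where the percentage column name [Crop_Cover] and the previous step name are assumptions:

= Table.AddColumn(#"Changed Type", "Class", each if [Crop_Cover] < 1 then "Class I" else if [Crop_Cover] < 2 then "Class II" else if [Crop_Cover] < 3 then "Class III" else if [Crop_Cover] < 4 then "Class IV" else if [Crop_Cover] <= 5 then "Class V" else "Class VI")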

See the resultant column, created with the class information as we desired.

Use of conditional column

Using DAX to create new column from existing column Power BI

We can also use DAX to create columns with substrings. The “New column” option is available under “Table tools” in Power BI desktop. See the image below.

New column option in Power BI Desktop

Now we need to write a DAX expression for the exact substring we want to populate the column with. In the image below, I have demonstrated a few DAX expressions for creating substrings from a column’s values.

Comparing them to the original values, you can easily understand how the expressions extract the required information.

Creating column with substrings
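For reference, DAX substring expressions along these lines use LEFT, RIGHT, and MID. The table name 'Crop' and the character counts are assumptions; note that MID’s starting position is 1-based, unlike Text.Middle in M:

First Characters = LEFT ( 'Crop'[State_Name], 5 )
Last Characters = RIGHT ( 'Crop'[State_Name], 7 )
Text Range = MID ( 'Crop'[State_Name], 13, 7 )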

Combining values of different columns

This is the last function we will discuss in this article. Instead of fetching a part of a column’s values, we may need to combine the values of two columns. Such a situation may arise when creating a unique id or a key column.

Likewise, here my purpose was to combine the state name and the corresponding district to get a unique column. I have used the COMBINEDVALUES() function of DAX to achieve this goal.

See the image below, where I have demonstrated the whole process with screenshots from my Power BI desktop.

Use of COMBINEDVALUES function

I have tried to cover the steps in a single image. The original two columns as well as the combined-value column are shown side by side so that you can compare the result.
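A minimal sketch of such a calculated column, assuming the table is named 'Crop' and the district column is [District_Name]:

State_District = COMBINEDVALUES ( "-", 'Crop'[State_Name], 'Crop'[District_Name] )

The first argument is the delimiter placed between the combined values; any string, such as “-” or “ | ”, will do.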

Final words

In this blog I have covered several options available in Power BI desktop for creating a new column by extracting values from other columns. We frequently face situations where we need to create such columns in order to get the desired analysis results or visualizations.

I hope the theory explained along with the detailed screenshots will help you understand all the steps easily. In case of any doubt, please mention it in the comments below; I would be happy to answer.