A detailed look into the miracles of quantization
Whenever I work on a deep learning project, train a model, and save it for production, the saved model takes up a huge amount of memory. So I started researching how to shrink the saved model, and that is where I came across the term “Quantization”. In this article I explain quantization in more detail, with some code samples and the theory behind it.
There are two forms of quantization: post-training quantization and quantization-aware training.
Start with post-training quantization since it’s easier to use, though quantization-aware training is often better for model accuracy.
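To make the idea concrete, here is a toy sketch of the core trick behind quantization: mapping float32 values onto int8. This is a pure NumPy illustration of the concept, not the TensorFlow Lite API, and the weight values are made up:

```python
import numpy as np

# Affine (asymmetric) int8 quantization:
#   real_value ≈ scale * (quantized_value - zero_point)
weights = np.array([-1.2, 0.0, 0.5, 2.3], dtype=np.float32)

qmin, qmax = -128, 127  # int8 range
scale = (weights.max() - weights.min()) / (qmax - qmin)
zero_point = int(round(qmin - weights.min() / scale))

quantized = np.clip(
    np.round(weights / scale) + zero_point, qmin, qmax
).astype(np.int8)
dequantized = scale * (quantized.astype(np.float32) - zero_point)

print(quantized)    # each value now takes 1 byte instead of 4
print(dequantized)  # close to the original float32 weights
```

Storing each weight in one byte instead of four is where the roughly 4x reduction in saved-model size comes from, at the cost of a small rounding error visible in the dequantized values.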
The complete story behind my developer certificate.
I received my TensorFlow Developer Certificate!!
Initially, when I started working on machine learning projects, it felt great to learn those algorithms and apply them to related problems. But when I came to neural networks, it was hard to implement solutions because the subject is so vast. So I started using TensorFlow, which took time to get comfortable with. Somewhere I heard that if things look complicated, you should start with the simple parts. I did exactly that, starting with the basic syntax and learning more about the API. …
Basic intuitions of Neural networks
When I started reading articles on neural networks, I struggled a lot to understand the basics of how they work. I kept reading more and more articles on the internet, grabbed the key points, and put them together into private notes. Then I thought of publishing them so others could understand the basics better.
It would be fun to know the basics of any domain.
The perceptron is one of the simplest ANN architectures, invented in 1957 by Frank Rosenblatt. It is based on a slightly different artificial neuron called a TLU (Threshold Logic…
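As a quick illustration of what a TLU computes — a weighted sum of the inputs passed through a step function — here is a minimal sketch. The weights implementing logical AND are hand-picked for the example, not learned:

```python
import numpy as np

def tlu(x, w, b):
    """Threshold logic unit: step function over a weighted sum plus bias."""
    return 1 if np.dot(w, x) + b >= 0 else 0

# Hand-picked weights so the unit fires only when both inputs are 1 (logical AND)
w, b = np.array([1.0, 1.0]), -1.5
print([tlu(np.array(x), w, b) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 0, 0, 1]
```

A perceptron layer is just several such units sharing the same inputs; training consists of adjusting `w` and `b` from labelled examples instead of picking them by hand.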
The toughest job in the field of Data Science is training DNN
For the past two years, I have been working in the Data Science field, which constantly pushes me to learn and improve. When it comes to DNNs, I now feel comfortable working with them, and I have gathered tricks to make neural networks train faster and reach the best accuracy. Through this article, I share those tricks to turn the toughest job into a simple one.
Training DNNs is a tough job due to the computational time and effort involved. …
In detail with Bag of Words, TF-IDF, RNNs, GRUs & LSTMs.
Yes! Alexa is built on NLP modelling.
But how does NLP work, and how is it designed for modelling?
In this article, I prepare data for NLP modelling and explain everything in detail, both in theory and in code.
Natural Language Processing (NLP) is about building models that interact with machines in human languages. It is a subfield of linguistics, computer science & AI.
NLP is nothing but designing systems that respond to human commands. As those commands are in text format, the model needs to be trained on…
Trouble-Free Concept in 5 min.
A time series model predicts future values based on previously observed values. It consists of data points recorded at every time step.
Time series analysis is widely used on non-stationary data such as:
Stock prices, weather predictions, economics, retail sales, etc. …
So, what do these aspects represent?
Stationarity is an important property in time series: a time series is said to be stationary only if its statistical properties do not change over time — most importantly, it maintains a constant mean and variance.
Stationary: its behaviour doesn’t change over time.
Non-stationary: its behaviour changes over time.
Machine learning is teaching or training machines to perform tasks and predict outcomes.
Types of machine learning
Supervised learning: takes both features and labels in the training dataset to learn and predict outcomes.
Unsupervised learning: contains only features, and extracts labels from the features and their patterns.
from sklearn.linear_model import LinearRegression
model = LinearRegression()
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
from sklearn.naive_bayes import…
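Whichever model is created, supervised training follows the same fit/predict pattern. A minimal sketch with made-up linear data (y = 2x, chosen purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy supervised dataset: features X with matching labels y
X = np.array([[1], [2], [3], [4]])
y = np.array([2, 4, 6, 8])

model = LinearRegression()
model.fit(X, y)              # learn from features and labels
print(model.predict([[5]]))  # → [10.]
```

The same `fit`/`predict` interface applies to LogisticRegression, the naive Bayes classifiers, and virtually every other sklearn estimator, which is what makes swapping models so easy.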