TensorFlow Lite (TF Lite) is a lightweight, production-ready, cross-platform framework for deploying ML models, used to run inference on edge devices such as mobile phones and microcontrollers.
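To make this concrete, here is a minimal sketch of the usual workflow: convert a trained Keras model to the TF Lite flat-buffer format and then run it with the TF Lite interpreter, which is what happens on-device. The tiny model below is purely illustrative, standing in for whatever model you have trained.

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny Keras model standing in for your trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert the Keras model to the TF Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run inference with the TF Lite interpreter (the same API the
# on-device runtime exposes).
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 8), dtype=np.float32))
interpreter.invoke()
pred = interpreter.get_tensor(out["index"])
```

The resulting `tflite_model` bytes can be written to a `.tflite` file and bundled into a mobile app or flashed onto a microcontroller.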
This article is aimed at ML engineers who are looking for ways to optimize models for deployment.
Suppose you have created and trained a model, and now you want to run inference with it on edge devices such as smartphones, Raspberry Pi, or Jetson Nano.
To get good predictions on such devices, your model should meet criteria like
Flutter provides fast development, and with a single codebase we can build apps for multiple platforms, i.e. Android, iOS, Ubuntu, macOS, and Windows. It also provides flexibility in building a custom UI, and hot reloading makes the development process smooth.
Provider is one of the most popular state-management solutions in Flutter, and it is highly recommended for beginners who want to learn state management.
So, you might be wondering: what is Provider?
Provider is a state-management helper. It is a widget that makes a value, such as a state model object, available to the widgets below it in the tree.
In this article, I will show you how to…
In this blog, we will look at the concept of weight pruning with Keras. Weight pruning is a model optimization technique that gradually zeroes out model weights during the training process to achieve model sparsity. The resulting sparsity enables model compression, and the technique is widely used to reduce model size and latency.
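The core idea can be sketched without any framework: zero out the smallest-magnitude weights until a target fraction of them is zero. This is a one-shot NumPy illustration of the concept; the Keras tooling (`tensorflow_model_optimization`'s `prune_low_magnitude`) does the same thing gradually over the course of training. The function name and example values here are my own, for illustration only.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.1, -0.8], [0.05, 1.2]])
pruned = prune_by_magnitude(w, 0.5)
# the two smallest-magnitude weights (0.1 and 0.05) are zeroed,
# leaving [[0.0, -0.8], [0.0, 1.2]]
```

Doing this abruptly after training usually hurts accuracy, which is why the Keras pruning API instead ramps sparsity up gradually while training continues, letting the remaining weights compensate.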
I will implement weight pruning on the Fashion-MNIST dataset and compare a normally trained model against its pruned counterpart.
The example I will be implementing requires TensorFlow version 2.4 as well as
In the previous article, we discussed saving our TensorFlow models in TF-Lite format. Now let's understand why it is important to optimize models.
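One way to see the payoff of optimization is to compare a plain float32 conversion against one with dynamic-range quantization enabled, which stores weights as 8-bit integers. The model below is a hypothetical example sized so the difference is visible; the only change between the two conversions is setting `converter.optimizations`.

```python
import tensorflow as tf

# Hypothetical dense model with enough weights for the size
# difference to be visible (layer sizes are illustrative).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(256,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Baseline: plain float32 conversion.
baseline = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Optimized: dynamic-range quantization stores weights as int8.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized = converter.convert()

# The quantized flat buffer is noticeably smaller than the baseline.
print(len(baseline), len(quantized))
```

Smaller models download faster, use less memory, and tend to run faster on the integer-friendly hardware found in phones and microcontrollers, which is exactly why optimization matters for edge deployment.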
I am currently among the top GitHub contributors from India, ranked #136. I am an aspiring junior data scientist at Codebugged AI.