I got to participate in GSoC 2021 this year, helping to improve the computer-vision sample apps of TensorFlow Lite. The idea behind the project is to improve the computer-vision sample apps that use the TensorFlow Lite Task Library as well as the TensorFlow Lite Support Library. The main objective is to integrate CameraX and update the apps so that developers from the community find it easy to integrate machine learning into Android apps.
I find myself lucky to have them as my mentors.
Link of the Project: https://summerofcode.withgoogle.com/projects/#4931401570320384
GitHub Link: https://github.com/sayannath/GSoC-Project-2021
TensorFlow Lite is a lightweight, production-ready, cross-platform framework for deploying ML models to run inference on edge devices like mobile phones and microcontrollers. It is aimed at ML engineers who are looking for ways to optimize models for deployment.
Suppose you have created and trained a model, and now you want to run inference with it on edge devices like smartphones, Raspberry Pi, and Jetson Nano. To get good predictions on such devices, your model should satisfy criteria such as the following:
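Getting such a model onto an edge device starts with converting it to the TFLite format. Here is a minimal sketch: the tiny Sequential model is only a hypothetical stand-in for your trained model, and the output path is illustrative.

```python
import tensorflow as tf

# Hypothetical stand-in for the model you have created and trained.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert the Keras model into the TFLite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# The result is a bytes object that can be written out as a .tflite file
# and shipped to a phone or microcontroller.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The `.tflite` file is then loaded on-device with the TFLite interpreter (for example, via the Task Library on Android).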
Flutter provides fast development, and with a single codebase we can build apps for multiple platforms, i.e. Android, iOS, Ubuntu, macOS, and Windows. It also provides flexibility in building custom UIs, and hot reloading makes the development process smooth.
Provider is one of the most popular packages for state management in Flutter, and it is highly recommended for beginners who want to learn state management.
So, you might be wondering: what is Provider?
Provider is a state-management helper. It is a widget that makes a value, such as a state model object, available to the widgets below it in the tree.
In this article, I will show you how to…
Redis is an open-source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such…
In this blog, we will look at the concept of weight pruning with Keras. Weight pruning is a model-optimization technique that gradually zeroes out model weights during training to achieve model sparsity.
This sparsity enables improvements via model compression, and the technique is also widely used to decrease model latency.
I will implement weight pruning on the Fashion-MNIST dataset, comparing a normally trained model against a pruned one.
The example I will implement requires TensorFlow version 2.4 as well as…
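To make "gradually zeroing out weights" concrete, here is a plain-Python sketch of the polynomial-decay sparsity schedule commonly used by the TensorFlow Model Optimization Toolkit's pruning API. The function name and default values are my own illustration, not the library's API.

```python
def polynomial_sparsity(step, begin_step, end_step,
                        initial_sparsity=0.0, final_sparsity=0.8, power=3):
    """Fraction of weights zeroed out at a given training step.

    Ramps from initial_sparsity at begin_step to final_sparsity at
    end_step following a polynomial-decay curve.
    """
    if step <= begin_step:
        return initial_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - begin_step) / (end_step - begin_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1 - progress) ** power
```

At each pruning step, the schedule's current sparsity determines how many of the smallest-magnitude weights are masked to zero; by the end of training the model reaches the target sparsity (80% in this sketch).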
In the previous article, we discussed saving our TensorFlow models in the TFLite format. Now let's understand why it is important to optimize models.
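A back-of-the-envelope calculation shows why optimization matters on edge devices: weight storage scales with bits per weight, so quantizing float32 weights to int8 shrinks them roughly 4x. The helper below is purely illustrative.

```python
def weight_storage_bytes(num_params, bits_per_weight):
    # Approximate storage for the weights alone, ignoring model metadata.
    return num_params * bits_per_weight // 8

# A model with one million parameters:
float32_size = weight_storage_bytes(1_000_000, 32)  # 4,000,000 bytes (~4 MB)
int8_size = weight_storage_bytes(1_000_000, 8)      # 1,000,000 bytes (~1 MB)
```

Beyond size, lower-precision arithmetic also tends to reduce inference latency and power consumption on mobile hardware, which is exactly what edge deployment needs.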