Author name: Rishabh

But What Are Word Embeddings?

Simple. Neural networks only understand numerical data. Their weights, their biases, and their training all operate on numerical values, floats and integers, yada yada. So when we want to work with words and other textual data, we need a way to convert them into a numerical representation, and that's where word embeddings come into the picture. …
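The idea above can be sketched with a toy lookup table. This is a minimal, hypothetical example: the vocabulary and the random 4-dimensional vectors stand in for embeddings that a real model would learn during training.

```python
import numpy as np

# Hypothetical toy vocabulary; a real model learns these vectors during training.
vocab = {"cat": 0, "dog": 1, "car": 2}
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), 4))  # 4-dimensional embeddings

def embed(word):
    """Look up the dense numerical vector that represents a word."""
    return embedding_matrix[vocab[word]]

vec = embed("cat")  # a 4-dimensional float vector the network can consume
```

The word itself never enters the network; only its row of the embedding matrix does.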


What is Federated Learning

With concerns about data privacy rising alongside the rise of AI models, people are increasingly worried about their data being misused for purposes beyond their consent. Data is often called the new oil of this century, and it is a very sought-after commodity. In response to these concerns, Google introduced the concept of …
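The core loop of federated learning can be sketched in a few lines. This is a toy federated-averaging round on a linear model with made-up client data, just to show the key property: clients share only model weights, never their raw data.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    # One gradient-descent step on a linear model with mean-squared error,
    # computed entirely on the client's own private data.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(client_weights):
    # The server only ever sees weights, which it averages into a new global model.
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Four hypothetical clients, each holding its own small private dataset.
clients = [(rng.normal(size=(10, 3)), rng.normal(size=10)) for _ in range(4)]

for _ in range(5):  # a few communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates)
```

Real systems (e.g. Google's Gboard deployment) add secure aggregation and client sampling on top of this basic round structure.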


Universal Backdoor Attacks

Today I will be discussing my understanding of the paper Universal Backdoor Attacks. The paper delves into another exciting exploit that can be leveraged against popular convolutional models such as ResNets. What is a Universal Backdoor? A backdoor is an alternate entry to your house. In the field of computers and security in general, it …


BIM: Advanced FGSM Attack

Previously we talked about the Fast Gradient Sign Method (FGSM). We saw how this white-box technique cleverly exploits a model's gradients to perturb the input so that the model gives a wrong prediction. Since in that method we perturb our input just once, a modified version of this attack does so repeatedly for …
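The repeated perturbation described above is the Basic Iterative Method (BIM). Here is a minimal sketch on a toy linear "model" with a squared-error loss: each iteration takes a small FGSM step in the direction of the loss gradient's sign, then clips so the total perturbation stays inside an epsilon ball around the original input. The scorer, weights, and step sizes are all illustrative assumptions, not values from any paper.

```python
import numpy as np

def model_loss_grad(x, w, y):
    # Gradient of the squared-error loss for a toy linear scorer w . x
    return 2 * (w @ x - y) * w

def bim_attack(x, w, y, eps=0.3, alpha=0.05, steps=10):
    x_adv = x.copy()
    for _ in range(steps):
        grad = model_loss_grad(x_adv, w, y)
        x_adv = x_adv + alpha * np.sign(grad)     # one small FGSM step (ascend the loss)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # keep total perturbation within eps
    return x_adv

x = np.array([0.5, -0.2, 0.1])
w = np.array([1.0, 2.0, -1.0])
x_adv = bim_attack(x, w, y=1.0)
```

With many small steps instead of one large one, BIM typically finds stronger adversarial examples than a single FGSM step of the same total budget.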


Adversarial Attacks

In this post, we will be talking about the vulnerabilities that plague machine learning. Yes, in the realm of computer science, no field is free of vulnerabilities and loopholes, and as we progress towards an AI-driven future, the security and robustness of machine learning models become increasingly important. What are Adversarial Attacks? The …


Gradient Descent

The special ingredient of machine learning. We learned in the last post about Linear Regression. We concluded with a cost function that we needed to minimize. Today we will see how we minimize this cost function. To recap, the cost function was: J(𝚯) = (1/2m) Σᵢ (h𝚯(x⁽ⁱ⁾) − y⁽ⁱ⁾)². Here, h𝚯(x) is the linear regression equation that we discussed earlier (y = …
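The minimization can be sketched directly. This is a minimal, assumed setup: batch gradient descent on the standard linear-regression cost J(theta) = (1/2m) * sum((h_theta(x) - y)^2) with h_theta(x) = theta0 + theta1*x, using a made-up dataset generated from theta = (1, 2); the learning rate and step count are illustrative choices.

```python
import numpy as np

def gradient_descent(x, y, lr=0.05, steps=500):
    m = len(y)
    theta = np.zeros(2)  # start at theta0 = theta1 = 0
    for _ in range(steps):
        h = theta[0] + theta[1] * x           # predictions h_theta(x)
        grad0 = np.sum(h - y) / m             # dJ/dtheta0
        grad1 = np.sum((h - y) * x) / m       # dJ/dtheta1
        theta -= lr * np.array([grad0, grad1])  # step downhill on J
    return theta

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x            # toy data generated from theta = (1, 2)
theta = gradient_descent(x, y)
```

Each iteration nudges theta against the gradient of the cost, so theta converges toward the generating values (1, 2).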
