Beginner’s step-by-step guide to getting Kubeflow running on a GCP VM with minikube

Image credit: https://sp.depositphotos.com/stock-photos/timonel.html

Kubeflow’s goal is to make deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable. The spirit is to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow. For a glimpse at the official documentation, go here.

It all sounds great, but it turns out that Kubeflow, while great, is not exactly lightweight: considerable resources are needed to get it up and running. Many of us do not have very powerful machines at home, so we default to the cloud. We…
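The broad strokes of the setup might look like the commands below. This is only a sketch: flags, machine sizes, and install steps change between versions, the instance name and resource sizes are placeholders, and you should check the official Kubeflow and minikube docs for the current procedure.

```shell
# Sketch only -- verify flags and versions against the official docs.
# 1. Create a roomy VM (Kubeflow is resource-hungry); name and sizes are placeholders.
gcloud compute instances create kubeflow-vm \
    --machine-type=n1-standard-8 \
    --boot-disk-size=200GB

# 2. SSH into the VM, install Docker, kubectl, and minikube, then start
#    a single-node cluster with enough headroom for Kubeflow.
minikube start --cpus 6 --memory 12g --disk-size 40g
```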


credit: https://www.vecteezy.com/vector-art/230978-man-looking-with-binocular

Artificial Intelligence (AI) and Machine Learning (ML) are here to stay. Paramount to their successful implementation is the amount of training data. In practice, large training datasets (millions of samples) are needed in order to arrive at accurate models.

Data in general is abundant. But for ML purposes this data needs to be tagged; that is, it has to carry labels indicating what the outcome of the eventual ML process should be. Labeling is expensive. That is why we use Data Augmentation techniques: from an original dataset, we artificially create new tagged samples to ‘augment’ it.
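The idea above can be sketched in a few lines. This is an illustrative numpy-only example (the 8×8 array is a stand-in for a real image): each transformed copy inherits the original label, so one tagged sample becomes several.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))  # toy stand-in for a labeled image
label = 1                   # the label is inherited by every augmented copy

augmented = [
    (np.fliplr(image), label),            # horizontal flip
    (np.flipud(image), label),            # vertical flip
    (np.rot90(image), label),             # 90-degree rotation
    (np.clip(image + rng.normal(0, 0.05, image.shape), 0, 1), label),  # noise
]

# One original sample became four extra tagged samples.
print(len(augmented))
```

In practice libraries handle this for you (random crops, shifts, color jitter, and so on), but the principle is the same: cheap transformations, free labels.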


Introductory level explanation with accompanying code snippets to follow along…

Machine Learning academic curricula tend to focus almost exclusively on the models. One may argue that the model is what performs the magic. That may hold some truth, but the magic only works if the data is in the right form. And to make things more complicated, the ‘right form’ depends on the type of model.

Credits: https://www.freepik.com/free-vector/pipeline-brick-wall-background_3834959.htm (*I liked the Mario Bros. image better… but you know: copyright)

Getting the data into the right form is what the industry calls preprocessing. It takes up a large chunk of the machine learning practitioner’s time. For the engineer, preprocessing and fitting, or preprocessing and predicting, are two distinct processes, but in a production environment…
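One common way to keep preprocessing and model in lockstep is to bundle them into a single object. A minimal sketch with scikit-learn (the dataset and model are arbitrary choices for illustration): fitting the pipeline scales the data and then trains, and predicting on new data reuses the scaling statistics learned at training time.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and model as one unit: .fit() scales then trains,
# .predict()/.score() apply the *training* scaling to unseen data.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))
```

This is precisely the production concern hinted at above: if preprocessing lives outside the model, it is easy to apply one set of transforms at training time and a subtly different one at prediction time.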


Ever wanted to predict the future? Humanity has dreamed about it for as long as it has existed. Questions like:

Will this year’s crops be abundant?

Will next winter be mild?

Will this enterprise succeed?

What will the price of gold in one year be?

How many inhabitants will the city have?

In the past, we had soothsayers, pythonesses, fortune tellers, priests, and the like. In modern times we have Data Scientists and Engineers using Machine Learning (ML) and Artificial Intelligence (AI). They come with a variety of methods especially suited for temporal data. In ML vocabulary, they have…


If you have ever attempted to optimize a machine learning model’s hyperparameters, you are familiar with the feeling of digging for gold guided only by your intuition and tales of where other miners found some. If you ask what hyperparameters to use, the experts will tell you:

“use these, they work well”.

Hyperparameter optimization can feel like digging for gold.

Or, the equivalent in miners’ jargon:

“I have heard somebody struck gold there; go dig close to it”.

Not very satisfying. You decide to do things the scientific way. So you draw a grid over your parcel and dig at the intersections of the lines. Equivalent…
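The grid-digging metaphor maps directly onto grid search. A minimal sketch with scikit-learn’s GridSearchCV (the dataset, model, and candidate values are arbitrary choices for illustration): the parcel is the hyperparameter space, the grid lines are the candidate values, and each intersection is one configuration to try with cross-validation.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Each (C, gamma) pair is one "intersection" to dig at.
grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(), grid, cv=5)
search.fit(X, y)

print(search.best_params_)   # the most promising intersection
print(search.best_score_)    # the cross-validated "gold" found there
```

Note the cost: the number of intersections grows multiplicatively with each hyperparameter you add, which is what motivates alternatives such as random search.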


Not the typical piggyback…but it gets the point across.

Abstract

This paper explores Transfer Learning (TL), a powerful technique in the Deep Learning realm. In Transfer Learning, network architectures and trained parameters obtained on a particular dataset are borrowed and reused for a new task. We examine different TL approaches for image recognition on the CIFAR-10 dataset using network architectures pre-trained on the ImageNet dataset. Do not be put off by the academic tone; this is not an academic paper. There are TensorFlow Keras code snippets at the end.

Introduction

We, humans, can apply what we have learned while performing a particular task to another loosely related one. From…
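The idea can be sketched in a toy, numpy-only way (this is not the post’s actual TensorFlow Keras code; all names and shapes here are illustrative). A frozen random projection stands in for the pretrained network body, and only a small new head is trained for the new task, here by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for a pretrained network body. In the real setting this
# would be an ImageNet-trained convnet with its layers set non-trainable;
# here it is just a fixed projection that is never updated.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    # The borrowed feature extractor (frozen during the new training).
    return np.maximum(x @ W_frozen, 0.0)

# Toy "new task" data standing in for CIFAR-10.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(int)

# Only the new head is trained -- a linear layer fit by least squares.
F = features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)

preds = (F @ head > 0.5).astype(int)
accuracy = (preds == y).mean()
print(accuracy)
```

The design point is the split: the expensive representation is reused as-is, and only the cheap task-specific head is learned from the (possibly small) new dataset.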


Introduction

In this post, we will cover and briefly summarize the article titled ImageNet Classification with Deep Convolutional Neural Networks authored by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton of the University of Toronto and published in Advances in Neural Information Processing Systems 25 (NIPS 2012). The official abbreviation for the conference changed from NIPS to NeurIPS in 2018. NeurIPS is a machine learning and computational neuroscience conference held every December since 1987.

Ilya Sutskever (left), Alex Krizhevsky (centre), and Geoffrey Hinton (right): the men who revolutionized computer vision, machine translation, games, and robotics

The article describes the architecture and training of a deep convolutional neural network to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into the…


In our previous post, we talked about Optimization Techniques. The mantra was speed, in the sense of “take me down -that loss function- but do it fast”. In the present post, we will talk about Regularization Techniques, namely, L1 and L2 regularization, Dropout, Data Augmentation, and Early Stopping. Here our enemy is overfitting and our cure against it is called regularization.
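Of the techniques listed, L2 regularization is the easiest to show in a few lines. A numpy-only sketch (the data is synthetic and the closed-form ridge solution is used purely for illustration): the penalty term lam * ||w||^2 added to the squared-error loss shrinks the learned weights toward zero, which is the mechanism that curbs overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression: 30 noisy samples, 10 features, only the first matters.
X = rng.normal(size=(30, 10))
y = X[:, 0] + 0.1 * rng.normal(size=30)

def fit(X, y, lam):
    # L2-regularized least squares: minimize ||Xw - y||^2 + lam * ||w||^2.
    # Closed form: w = (X^T X + lam * I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_plain = fit(X, y, 0.0)   # no regularization
w_ridge = fit(X, y, 10.0)  # L2 penalty active

# The penalty shrinks the weight vector toward zero.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_plain))
```

L1 works analogously but pushes individual weights exactly to zero; Dropout, Data Augmentation, and Early Stopping attack overfitting from different angles rather than through a penalty term.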


Most used Optimization Techniques explained plus code snippets and additional resources.

Let’s start by making a couple of clarifications. In Machine Learning, the goal is to arrive at a set of Parameters modeling a given problem with reasonable accuracy. This is done via a training process where those parameters are tested and corrected in a continuous forward-and-backward fashion. Next to the parameters one finds a close relative: the Hyperparameters. Unlike the parameters themselves, these are set before the training or learning process begins. Their objective is to make the process as efficient as possible, in terms of speed and accuracy. Adjusting the Hyperparameters is what we call Optimization. …
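To make the “take me down that loss function, but do it fast” idea concrete, here is a numpy-free sketch of gradient descent with momentum on the simplest possible loss, f(w) = w² (the loss, learning rate, and momentum values are arbitrary choices for illustration). Note that lr and beta are exactly the kind of hyperparameters set before training begins.

```python
# Minimize f(w) = w^2 with gradient descent plus momentum.
lr, beta = 0.1, 0.9   # hyperparameters: fixed before training starts
w, v = 5.0, 0.0       # parameter and velocity

for _ in range(200):
    grad = 2 * w          # df/dw for f(w) = w^2
    v = beta * v + grad   # momentum: accumulate past gradients
    w = w - lr * v        # parameter update along the velocity

print(w)  # very close to the minimum at w = 0
```

Plain gradient descent is the beta = 0 special case; optimizers such as Adam extend this same loop with per-parameter adaptive step sizes.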


The public transport service review app.

Couple boarding a taxi

Picture the following situation: you or a loved one are about to board a taxi. Many questions arise: is it safe? How is the service going to be? Does the driver have a bad history? Would I trust this person to drive a loved one? Is it going to be comfortable? Is the driver respectful? Is the interior clean? And many more. In Colombia, with more than 200K registered taxis each delivering between 15 and 20 rides daily, more than 3 million similar stories take place every day. That is, you, me or someone before…

Santiago Velez Garcia

ML engineer and languages enthusiast.
