
# Tutor profile: Abhinav A.

Abhinav A.
Data Scientist at Verizon

## Questions

### Subject: SQL Programming

Question:

What is table partitioning?

Abhinav A.

A relation is made up of fields (columns) and records (rows). These are stored on disk as B-trees in a binary format, often proprietary to the vendor. When a table grows huge in terms of records, it can become impractical to store it at a single physical location. To keep the data in a single logical table while spreading it across physical locations, a technique called table partitioning is used: the data is split horizontally on the basis of a partition key. This column decides which partition a particular record will reside in. If the partitioning is done right, select or filter queries can target only a subset of the partitions instead of all of them, greatly reducing the amount of data scanned and improving query performance. In Oracle databases, the PARTITION BY clause does the trick.
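As a sketch, an Oracle-style range-partitioned table might be declared like this (the table and column names are illustrative, not from any real schema):

```sql
-- Hypothetical sales table partitioned by year of sale_date.
-- A query that filters on sale_date can be pruned to a single partition.
CREATE TABLE sales (
    sale_id   NUMBER,
    sale_date DATE,
    amount    NUMBER(10, 2)
)
PARTITION BY RANGE (sale_date) (
    PARTITION sales_2023 VALUES LESS THAN (DATE '2024-01-01'),
    PARTITION sales_2024 VALUES LESS THAN (DATE '2025-01-01'),
    PARTITION sales_max  VALUES LESS THAN (MAXVALUE)
);
```

Here `sale_date` is the partition key: a query like `WHERE sale_date >= DATE '2024-03-01'` only needs to scan the `sales_2024` and `sales_max` partitions.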

### Subject: Python Programming

Question:

What are $$\textbf{Python generators}$$?

Abhinav A.

Before answering this, let's quickly recap what a function and an iterator are.

A function (also called a method or subroutine) is a piece of code bundled together as an atomic unit for the purposes of reuse, standardization, and/or readability. A function hands its result back to the calling code via the keyword `return`. The constraint is that `return` fires only once per call, giving the output of the function's logic back to the piece of code that called it.

An iterator, in Python, is an object that implements the `__iter__` and `__next__` methods. The first initializes the iterator and must return an object that implements the second. Such objects can be used in `for ... in ...` loops; e.g. `for l in [1, 2, 3, 4, 5]` produces 1, 2, 3, 4, and 5 as successive values of `l`.

To maintain a function's state between calls (overcoming the single-`return` constraint) while avoiding the overhead of implementing `__iter__` and `__next__`, Python has generators. These functions use `yield` in place of `return` and keep the function's state even after control is given back to the caller. Because `yield` can fire more than once, each resumption can produce a different, programmatically defined result. For example:

```python
def tutorme_generator(limit):
    # The state (count) lives in a local variable and survives between yields.
    count = 0
    while count < limit:
        if count % 2 == 0:
            yield count ** 2
        else:
            yield "Tutor Me allows only even numbers to be squared"
        count += 1

for value in tutorme_generator(10):
    print(value)
```
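To make the overhead concrete, here is a hedged sketch contrasting the two approaches: the same even-squares sequence written once as an iterator class (implementing `__iter__`/`__next__` by hand) and once as a generator. The class and function names are illustrative, not standard Python API.

```python
class EvenSquares:
    """Iterator over squares of even numbers below `limit`, written by hand."""

    def __init__(self, limit):
        self.limit = limit
        self.current = 0

    def __iter__(self):
        # Must return an object implementing __next__; here, itself.
        return self

    def __next__(self):
        if self.current >= self.limit:
            raise StopIteration
        value = self.current ** 2
        self.current += 2
        return value


def even_squares(limit):
    """Same sequence as a generator: the state lives in local variables."""
    current = 0
    while current < limit:
        yield current ** 2
        current += 2


print(list(EvenSquares(10)))   # [0, 4, 16, 36, 64]
print(list(even_squares(10)))  # [0, 4, 16, 36, 64]
```

Both are usable in `for ... in ...` loops; the generator simply lets Python build the iterator machinery for you.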

### Subject: Data Science

Question:

What is Machine Learning?

Abhinav A.

Machine Learning is a careful mixture of six ingredients. The goal of any machine learning problem is to "predict" an output, given an input.

1. The first ingredient is the input, i.e. $$\textbf{data}$$. It is gathered through crowdsourcing, application traces, or freely available datasets, and can be structured, semi-structured, or unstructured; text, video, images, streaming, or batch.
2. The $$\textbf{task}$$ explains the motive of the machine learning effort at hand: it can be classifying images, generating new images, and so on. Depending on the task and the model, the data is converted into input features.
3. The $$\textbf{model}$$ is the mathematical formulation of the task. It is literally the function mapping features to outputs, and well-curated data is the fuel that lets it capture the latent patterns as closely as possible. Writing the model as f(x) = y, the goal is to find the right coefficients of f so that its output is as close to y as possible across all the data points used for "training".
4. The $$\textbf{loss}$$ is a mathematical measure of how far the model's predicted output is from the real output. The higher the loss, the worse the choice of coefficients in f(x), and hence the worse the model. The aim is to minimize the loss.
5. For this, machine learning has a critical step called $$\textbf{learning}$$. This step adjusts the coefficients so that the loss is minimized. Because the process is iterative, it requires a significant amount of time and compute. When the loss falls below $$\epsilon$$, a very small acceptable threshold, "convergence" is reached.
6. Finally, once a machine learning model's parameters are learned, it is ready to predict the output it was trained for. To measure the model's performance, $$\textbf{evaluation}$$ is done, with accuracy and precision being among the most popular evaluation metrics.

It is easy to get bamboozled by jargon, especially terms you don't know. In such a scenario, it helps to remember that any machine learning problem fits this mold. There are more advanced topics such as checkpointing, hyperparameter tuning, and machine learning interpretability, but with suitable understanding even those fit into one of these ingredients.
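The six ingredients can be lined up in a toy sketch: a 1-D linear regression trained by gradient descent. All names, numbers, and the learning rate are illustrative assumptions, not part of any real library.

```python
# 1. Data: input/output pairs following y = 2x (a hypothetical toy dataset).
data = [(x, 2.0 * x) for x in range(10)]

# 2. Task: predict y from x.  3. Model: f(x) = w * x with one coefficient w.
w = 0.0

def predict(x):
    return w * x

# 4. Loss: mean squared error between predictions and true outputs.
def loss():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

# 5. Learning: gradient descent nudges w until the loss drops below epsilon.
epsilon = 1e-8
learning_rate = 0.005
while loss() > epsilon:
    gradient = sum(2 * (predict(x) - y) * x for x, y in data) / len(data)
    w -= learning_rate * gradient

# 6. Evaluation: compare the learned coefficient to the true value (2.0).
print(round(w, 3))
```

The loop is the iterative learning step from ingredient 5; "convergence" here is simply the loss falling below the chosen epsilon.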
