# Tutor profile: Aabir A.

## Questions

### Subject: Python Programming

What is the difference between a list and a tuple?

Python allows users a great deal of freedom with its data structures, and this is something that a lot of Python loyalists are huge fans of. For example, variables in Python do not have to be declared in advance, and can be rebound to a value of any datatype at any point in the program. However, any programming language must restrict freedom at some point in order to stay consistent and secure, and Python has therefore distinguished between tuples and lists. A list in Python is certainly similar to a tuple in many ways. They both can hold arbitrary numbers (within the memory limit) of values. Each value can be of an entirely different datatype. Indexing and accessing values works the same way in both cases (`sequence_name[index]`). However, a list is a **mutable** datatype, while a tuple is not. This simply means that a list can be modified in place: values can be inserted into, deleted from, or shuffled about **within the same list**. With a tuple, values can only be packed, unpacked, and individually accessed. Python itself leans on this immutability: extra positional arguments collected with `*args` arrive inside the function as a tuple, not a list, and a function that returns several comma-separated values returns them packed into a tuple. These individual elements can, of course, be lists, tuples, or anything else - but the container itself is an immutable tuple, so the structure of what was passed in or returned cannot be played with.
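A minimal sketch of these differences (the names `colors`, `collect`, and `min_max` are illustrative):

```python
# A list is mutable: it can be modified in place.
colors = ["red", "green", "blue"]
colors[1] = "yellow"          # replace a value
colors.append("purple")       # insert a value
assert colors == ["red", "yellow", "blue", "purple"]

# A tuple is immutable: the same operation raises a TypeError.
point = (3, 4)
try:
    point[0] = 5
except TypeError:
    print("tuples cannot be modified in place")

# Extra positional arguments are packed into a tuple, not a list...
def collect(*args):
    return type(args).__name__

assert collect(1, [2, 3], "four") == "tuple"

# ...and multiple return values come back packed into a tuple.
def min_max(values):
    return min(values), max(values)

result = min_max([7, 1, 9])
assert isinstance(result, tuple) and result == (1, 9)
```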

### Subject: Machine Learning

What does the Shannon Entropy have to do with information?

The Shannon Entropy for a probability distribution is given by: $$H = -\sum_{i} p_i \log(p_i)$$ This is actually a beautiful equation that represents the expected degree of 'surprise' carried by a probability distribution. To understand this, think of two independent events, A and B, whose 'surprisingness' we'd like to measure as quantities. Let their 'surprise values' be $$a$$ and $$b$$ respectively. So: $$\text{Surprise}(A) = a, \quad \text{Surprise}(B) = b$$ Now, if A and B **both** occur, that would occur with a lower probability than either of A or B occurring. In fact, since the events are independent, this probability would be the **product** of the probabilities of A and B occurring individually. Our surprise, on the other hand, should simply accumulate: witnessing both surprising events should surprise us by the **sum** of the two surprises, not the product. So this sets up the rules: $$\text{Surprise}(A \textbf{ and } B) = \text{Surprise}(A) + \text{Surprise}(B), \quad P(A \textbf{ and } B) = P(A) \cdot P(B)$$ Turning products into sums is exactly what the function $$\log$$ does, as: $$\log(ab) = \log(a) + \log(b)$$ Defining the surprise of an event with probability $$p$$ as $$-\log(p)$$ (the minus sign makes rarer events more surprising), the Shannon Entropy is just the expected surprise over the whole distribution. This explains why the Shannon Entropy (used even in thermodynamics) is a good measure of the surprise of a probability distribution. Now, the probability distribution is itself a measure of the spread of the data you have. And the more surprising the spread, the more information is contained in it! For example, if all of your friends like the same TV show, there's very little 'spread' in the data. Your choice is not 'surprising' in the context of each other's choices. On the other hand, if you all like very different shows, there is more 'surprise' in each of your choices, and thus, more information in the set of your choices! Thinking about measures of information using the Shannon Entropy is extremely useful in Machine Learning, as it enables us to develop intelligent ways of discarding or retaining information from a dataset. This is a feature of the t-SNE algorithm that is widely used for dimensionality reduction.
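The TV-show example above can be sketched in a few lines (the distributions here are made up for illustration):

```python
import math

def shannon_entropy(probs):
    """Expected surprise -sum(p * log2(p)) of a distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Everyone likes the same show: no spread, no surprise, zero information.
same_show = [1.0, 0.0, 0.0, 0.0]
# Tastes split evenly across four shows: maximum spread and surprise.
varied = [0.25, 0.25, 0.25, 0.25]

assert shannon_entropy(same_show) == 0.0
assert shannon_entropy(varied) == 2.0   # log2 of 4 equally likely outcomes
```

Any in-between distribution lands between these extremes, which is exactly the sense in which entropy measures the information in the spread.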

### Subject: Physics

What are the analogies between rotational mechanics and translational ones?

Students often find it relatively easy to deal with translational mechanics. We're familiar with the concepts of pushing and pulling things, and of thinking and interacting in 3 dimensions. We can also relate to how displacement naturally follows vector addition laws - the result of two steps forward, one step to the right, and one step upwards is quite intuitive to us. Once students are introduced to the mathematics, force as the time derivative of momentum also starts to seem quite natural, and students are usually able to then build up concepts like the conservation of linear momentum, the conservation of energy, and so on. The amazing thing about these concepts is that there is no real restriction to translational motion. A displacement (going forward) in translation is analogous to an angular shift (turning) in rotation. It does take some time to get used to the idea that an object that's rotating might 'look' the same with different magnitudes of rotation (360, 720 degrees and so on) - but this does not change the fact that an object that has rotated once (360 degrees) has done something different from one that has rotated twice (720 degrees), much in the same way that an object that goes 1 step forward is different from one that goes 2 steps forward. Now, what about force? A force tends to turn an object around an axis more if it is farther from that axis - so the natural analog to force is torque, a 'vector' or 'directed quantity' that we actually direct perpendicular to the plane of the rotation. Why this weird choice of direction, you ask? Because it wouldn't make sense to represent a rotating behaviour by the direction it's currently moving in! That direction keeps changing! We'd be better off thinking of it as a quantity perpendicular to the plane of its motion - a direction it never moves in! And once we have this direction, we can think of adding rotations along different directions (just like displacements before) - for small turns and for rates of turning, the math works out just fine!
Similarly, this builds up to concepts like the conservation of *rotational* energy, and the conservation of *angular* momentum, both of which are extremely similar to the linear equivalents we discussed. I firmly believe that any student who understands linear mechanics but struggles with rotational mechanics is just a few short thinking exercises away from mastering both in synergy!
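The analogy above can be summarised as a small dictionary, with torque computed as a cross product so that it points perpendicular to the plane of rotation (the specific numbers below are illustrative):

```python
# Translational -> rotational dictionary:
#   displacement x      -> angular shift theta
#   velocity v          -> angular velocity omega
#   mass m              -> moment of inertia I
#   momentum p = m*v    -> angular momentum L = I*omega
#   force F = dp/dt     -> torque tau = dL/dt = r x F

def cross(u, v):
    """Cross product of two 3D vectors, as tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# A force of 2 N along +y, applied 3 m along +x from the axis...
r = (3.0, 0.0, 0.0)
F = (0.0, 2.0, 0.0)

tau = cross(r, F)
# ...gives a torque along +z: perpendicular to the xy-plane of the rotation,
# and larger for a larger lever arm r, just as the text describes.
assert tau == (0.0, 0.0, 6.0)
```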
