Tutor profile: Khushboo T.

Inactive
Khushboo T.
Classroom Mentor at Udacity

Questions

Subject: Python Programming

Question:

What is the difference between a tuple and a list?

Khushboo T.

A tuple is immutable: once created, its contents cannot be changed, only read. A list is mutable: its elements can be added, removed, or modified in place.

Tuple initialization: a = (2, 4, 5)
List initialization: a = [2, 4, 5]

The methods provided with each type also differ: lists have mutating methods such as append and sort, while tuples offer only read-only methods such as count and index. Check them out yourself.
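A minimal sketch illustrating the difference (variable names are just for the example):

```python
# Lists are mutable: they can be modified in place.
a = [2, 4, 5]
a.append(7)   # a is now [2, 4, 5, 7]
a[0] = 1      # a is now [1, 4, 5, 7]

# Tuples are immutable: any attempt to modify one raises TypeError.
b = (2, 4, 5)
try:
    b[0] = 1
except TypeError:
    print("tuples cannot be modified in place")

# Tuples still support read-only operations such as indexing and counting.
print(b[1])        # 4
print(b.count(5))  # 1
```

The same distinction applies to methods: a.sort() works on a list, but a tuple has no sort method at all.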

Subject: Machine Learning

Question:

How to determine K in K-Fold Cross Validation?

Khushboo T.

The practical aspects of choosing k are the following. A larger k means more folds, so in each fold run you train on most of the data and validate on only a small portion. This gives lower bias and higher variance, at the cost of a longer run time. A smaller k means more data is used for validation (and less for training) in each fold, which may result in large bias as well as large variance, but a shorter run time.

A few questions to consider (in case you had not considered them before): if you are using k-fold validation, the implicit assumption is that you don't have enough data to train on (for the complexity of the model needed), which would suggest preferring a larger k. In practice, people usually choose k = 10, since larger values rarely make much difference (though I suspect this depends on the data at hand, and you should check whether it holds for your data as well).

But the bigger question to ask is this: the model starts from scratch every time you begin a new fold (i.e., in 10-fold validation, the model is trained from scratch in each of the 10 runs, and all learning from prior folds is thrown away). If that is true, what is really the use of 10-fold validation? We use k-fold validation so that we don't overfit but also don't lose training data, so how does it help when each fold is treated as an independent learning experience?

Also, the recent craze is stochastic learning, where you repeatedly pick a small random batch of training data and incrementally tune the model (this differs from normal learning in that every record has the same probability of being selected but is not guaranteed to be selected). This reduces overfitting.

Let me know what you think. I will share mine after you have had time to do some research.
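The fold mechanics described above can be sketched in plain Python; this is a hand-rolled index splitter for illustration (a library such as scikit-learn provides an equivalent KFold class), not a full cross-validation loop:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation.

    Each of the k folds serves once as the validation set, while the
    remaining k - 1 folds form the training set for that run.
    """
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder when n_samples % k != 0.
        stop = n_samples if i == k - 1 else start + fold_size
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, val

# With 10 samples and k = 5, each validation fold holds 2 samples and
# each training set holds the other 8.
for train, val in k_fold_indices(10, 5):
    print("validate on", val, "train on", train)
```

Note how each iteration rebuilds the training set from scratch, which is exactly the "throw away all prior learning in each fold" behavior discussed above: the model trained in one fold shares nothing with the next.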

Subject: Artificial Intelligence

Question:

One of the things that I love about AI is that it brings up fascinating discussions, not only about technological abilities and algorithms but about philosophy and culture. As we create artificial intelligence, we almost necessarily have to reflect on our own intelligence. So, to get us started: what is intelligence, to you?

Khushboo T.

In natural-language terms, something is intelligent if its behaviors map to a location in the embedding/vector space in my head near other things I've seen called intelligent before. It's not an accident that language models, word embeddings, and what we know about learning relations as a vector-space model with neural nets seem to match how natural-language categories work, as in Wittgenstein's famous family resemblances example. So when people say "that's intelligent" or "that's not intelligent," you won't find an exact definition that covers every other thing they might call intelligent. And if people have been "trained" on different examples of intelligent things, they may also disagree about whether a thermostat, or a member of an opposing political party, is intelligent.

Of course, we're not bound by natural language, and if we want to do any meaningful math or programming we'll move pretty far away from it fairly quickly. Rather than get caught up in "is this what the word intelligence really means?" questions, it makes more sense to look at formal properties we can assign to humans that machines can emulate, or properties we can assign to machines that are better than the ones we can assign to humans. Personally, I don't think my brain uses an admissible heuristic for route-finding, but it seems like it would be a fairly smart thing to add if I could.

When people talk about general artificial intelligence, or "real" artificial intelligence, I really think they're talking about conscious experience and general skills. That is at least one of the things AI textbooks say AI is not really about, but consider, for example:

- the ability to carry on a conversation with an intelligent agent (e.g. a human) that convinces the other intelligent agent that the AI is conscious;
- the ability to write any general computer program a skilled human could write, from a natural-language description.

I think those are two goals that any AI artisan would be happy to achieve, and maybe the second one is not super far off. It's interesting to consider why a human being is capable of either. Certainly not every single human being is capable of either or both. But as a species with a collective history, generic tool use and language have figured prominently, along with the flexibility that lets our language capacity be applied to other domains (e.g. music, the written word, mathematics). The ability to write programs is a huge extension of those capacities.

In many ways, though, the first is more interesting to me. We carry in our heads tiny models of other people, how they think and what they do, and this perspective-taking component seems a little circular (but only incrementally so, if you think evolutionarily): maybe being conscious is a requirement for modeling other conscious things. That is, assuming others here are conscious and not philosophical zombies. Another way to state the problem, if one thought the impeccable appearance of consciousness implied consciousness, is: why would a machine need to be conscious? It strikes me as very puzzling that our consciousness seems to be directly tied to the tasks we perform that are the most, rather than the least, Turing-machine-like. What I mean is, the brain acts like a slow serial machine.

Anyway, that's a series of quick notes on several topics related to intelligence in general and AI that have been stewing in my brain recently. I'm not overly attached to any of these positions and would love to hear the thoughts others have.
