Tutor profile: Cybelle S.

Postdoc in Psychology at UPenn, PhD Psychology, MS Applied Statistics

Questions

Subject: Psychology

TutorMe
Question:

What are some common experimental methods for answering questions in human Cognitive Neuroscience, and what are their strengths and drawbacks?

Answer:

Cognitive neuroscience methods can be used to detect correlations between brain activity and behavior, and can also inform our understanding of the causal role that a particular brain region plays in a particular cognitive function or behavior.

Common "correlational" methods used in humans include electroencephalography (EEG), electrocorticography (ECoG, pronounced like the name of the letter "E" then "cog"), functional magnetic resonance imaging (fMRI), and optical imaging (which includes fNIRS, functional near-infrared spectroscopy, and EROS, the event-related optical signal). These methods vary in temporal resolution, spatial resolution, and invasiveness. fMRI measures slow changes in blood flow and oxygenation (the hemodynamic BOLD response), and has good spatial resolution but poor temporal resolution. EEG measures voltage potentials at the scalp, and has good temporal resolution but poor spatial resolution. ECoG measures voltage potentials using electrodes implanted under the skull; it has good spatial and temporal resolution, but it is invasive and difficult to acquire, is used only in patients (often with epilepsy), and usually offers limited coverage of the brain, since electrode placement is determined solely by clinical considerations.

Common methods for determining whether a brain region plays a causal role in a cognitive function or behavior include transcranial magnetic stimulation (TMS) and studies of patients with brain lesions due to stroke or pathology. TMS uses a strong magnetic field to temporarily stimulate (or disrupt) activity in a targeted brain region, and can be used on healthy individuals, making it a powerful method for determining whether a specific region plays a role in a cognitive function.
One sophisticated method for analyzing data from lesion patients is "voxel-based lesion-symptom mapping," which estimates the degree to which damage to each of many tiny parts of the brain (3-D pixels, or "voxels") contributes to cognitive impairment on some task.

Both TMS and lesion studies can also be used to test whether two cognitive processes are dissociable. The processes are considered dissociable if a pattern called a "double dissociation" is detected: damage or stimulation to one brain region impairs (or enhances) performance on cognitive task A but not B, while damage or stimulation to another region impairs (or enhances) performance on task B but not A. A single dissociation is not enough: if damage to one region impairs performance on task A but not B, it could simply be because task A is more difficult than task B, so the same underlying cognitive process could drive performance on both tasks.
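To make the core idea of voxel-based lesion-symptom mapping concrete, here is a minimal sketch in Python. Everything in it is simulated and illustrative (the lesion maps, the task scores, and the "planted" effect at voxel 7 are all made up), and real VLSM analyses add machinery this sketch omits, such as correction for testing thousands of voxels:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_patients, n_voxels = 40, 100  # toy sizes

# Simulated binary lesion maps (1 = this voxel is damaged in this patient)
# and one task score per patient.
lesions = rng.integers(0, 2, size=(n_patients, n_voxels))
scores = rng.normal(50.0, 10.0, size=n_patients)
scores -= 15.0 * lesions[:, 7]  # plant an effect: damage to voxel 7 impairs the task

# Core VLSM step: at each voxel, compare scores of patients with vs. without
# damage there. A strongly negative t means damage predicts impairment.
t_vals = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    damaged = scores[lesions[:, v] == 1]
    intact = scores[lesions[:, v] == 0]
    if damaged.size > 1 and intact.size > 1:
        t_vals[v] = ttest_ind(damaged, intact).statistic
print("t at planted voxel:", t_vals[7])
```

With this setup, the planted voxel shows a clearly negative t statistic while the untouched voxels hover near zero, which is exactly the per-voxel damage-symptom relationship VLSM maps across the brain.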

Subject: Statistics

Question:

What is the difference between a fixed and random effect in a linear regression model, and when would I choose to use a random effect?

Answer:

Fixed effects model the core predictors you care about, when you want to know the effect size for each specific predictor. For example, suppose you want to know how much people enjoy Pepsi, Coke, and generic cola, so you have people do a blind taste test and rate how much they enjoyed each soda from 1 to 5. In your model, "soda type" would be included as a (discrete) fixed effect to predict the score each person gave each soda.

Random effects model a distribution of predictors (often ones peripheral to your question of interest), and are included to ensure that your model generalizes across a particular population. For example, you know that some people simply like soda more or less than others, and the scores they assign to all the sodas will be higher or lower because of this. You want your model to capture the risk that, if you collected data from a bunch of new people, they might all hate soda or all love it and provide very different overall scores from your initial sample. In other words, you want to be sure that when you tell your boss "this is the score people typically assign to sodas overall," the error bars around your estimate are appropriately large.

To do this, you should include a random effect of person. This random effect adds one additional parameter to your model: the standard deviation of a normal distribution (the distribution is centered on the mean rating over the whole data set, so there is no extra parameter for the mean). You are essentially saying that, for each person in your study, the overall amount they like soda is drawn randomly from a normal distribution. If you had instead fit each person with a fixed effect, that would mean many more parameters (as many as the number of participants, minus 1), which is bad for statistical power (more on this below).
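One way to fit exactly this kind of model in Python is with statsmodels' mixed linear model. The data below are simulated, and the column names (person, soda, rating) and the "true" means are invented for the example; the continuous ratings stand in for the 1-to-5 scores:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
sodas = {"coke": 3.5, "pepsi": 3.2, "generic": 2.8}  # made-up true mean ratings

# Simulate a blind taste test: each person rates each soda, and each person
# has their own "likes soda overall" offset (this is the random intercept).
rows = []
for person in range(30):
    offset = rng.normal(0.0, 0.8)
    for soda, mean in sodas.items():
        rows.append({"person": person, "soda": soda,
                     "rating": mean + offset + rng.normal(0.0, 0.5)})
df = pd.DataFrame(rows)

# Fixed effect of soda type; random intercept for person (the groups argument).
fit = smf.mixedlm("rating ~ soda", df, groups=df["person"]).fit()
print(fit.summary())
```

The fixed-effect estimates (fit.fe_params) recover the per-soda means, the group variance estimates the person-to-person spread, and fit.random_effects returns the per-person estimates (the BLUPs discussed next).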
If you are curious how much a given person liked soda but used a random effect for person in your model, you can still come up with an estimate for each person using something called Best Linear Unbiased Predictors (BLUPs).

Including random effects when appropriate, rather than always using fixed effects, is important for most real-world applications: it not only ensures better generalization of your model to new data, but (relative to using fixed effects to model nuisance predictors) improves your statistical power to answer the questions you care about, because your model has fewer parameters and saves degrees of freedom. ("Power" refers to the probability that you will detect an effect that exists in the world, given the way the data were collected and the way you chose to model them, so more power is a good thing. "Degrees of freedom" are like the currency your data provide for answering statistical questions -- you spend them by adding parameters to your model, and the more you spend, the less power you have to answer your question, all else being equal.)

Bonus lesson -- random slopes: Say you want to tell your boss, "in general, people like Coke better than Pepsi." Since you want your statement to generalize to new samples of people, you'll now want to model differences across people in how much they prefer each soda to the others (rather than just how much they like soda overall). You would therefore add an additional random effect, often called a "random slope." In the three-soda case, you would now model three different normal distributions instead of one: "overall amount a person likes soda" (your random intercept), "overall amount a person likes Coke better than the mean of the sodas" (random slope number 1), and "overall amount a person likes Pepsi better than the mean of the sodas" (random slope number 2).
Why is there no random effect for how much more each person likes generic cola than the mean? Because, as with fixed effects, the number of predictors needed to model differences in a discrete category (like soda type) is the number of categories minus 1; with 3 categories of soda, we need only 3 - 1 = 2 random slope variables. You would also, in most cases, model the covariance among the random predictors, so your model will have a few more parameters (here, 1 random intercept variance + 2 random slope variances + 3 covariances = 6 total).

In many real-world applications, it is best practice to include both random intercepts and random slopes. A paid analyst working on the soda study would almost certainly include both in their regression model, unless the random effects turn out to explain very little of the variance (people don't differ much from each other). The only other exception is when there is not enough data to fit all the random effects you would like, and your model "fails to converge," in which case you can scale back the model by eliminating some of the covariance parameters (fixing them at 0) and/or eliminating random slopes.
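Extending the earlier person-random-intercept idea, random slopes can be added in statsmodels with the re_formula argument. Again everything is simulated and the names are illustrative; repeated tastings per person are included so there is enough data to estimate the per-person preference shifts:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
sodas = {"coke": 3.5, "pepsi": 3.2, "generic": 2.8}  # made-up true means

# Simulate repeated tastings: each person has an overall offset (random
# intercept) AND their own per-soda preference shifts (random slopes).
rows = []
for person in range(40):
    offset = rng.normal(0.0, 0.8)
    prefs = {s: rng.normal(0.0, 0.4) for s in sodas}
    for tasting in range(4):  # several ratings per soda per person
        for soda, mean in sodas.items():
            rows.append({"person": person, "soda": soda,
                         "rating": (mean + offset + prefs[soda]
                                    + rng.normal(0.0, 0.3))})
df = pd.DataFrame(rows)

# re_formula adds random slopes for the soda dummies on top of the random
# intercept, giving a 3x3 random-effects covariance matrix: 3 variances
# plus 3 covariances, i.e. the 6 parameters counted above.
fit = smf.mixedlm("rating ~ soda", df, groups=df["person"],
                  re_formula="~soda").fit()
print(fit.cov_re)  # estimated random-effects covariance matrix
```

If this richer model fails to converge on real data, dropping re_formula (or restricting the covariance structure) is the scaling-back step described above.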

Subject: Cognitive Science

Question:

Explain the difference between dualism and monism.

Answer:

"Dualism" and "monism" both refer to theories of the mind-body problem. "Dualism" is the belief that the mind is separate from the body, while "monism" holds that the two are inextricably intertwined aspects of a single kind of thing. René Descartes, the influential 17th-century French philosopher, was a dualist who believed that the mind is immaterial but can interact with the body via the pineal gland (the "seat of the soul"). Many modern monists are "physicalists," who hold that the mind is physically implemented in the brain.
