Around 55 minutes into the episode, Pat described one possible role for a philosopher with respect to the sciences as "the analogue of doing theoretical physics," and she mentioned Chris Eliasmith as a paradigm example. He's the Director of the Centre for Theoretical Neuroscience at the University of Waterloo.
I quote from the Centre's web site:
Theoretical neuroscience is the quantitative study of neurobiological systems using the tools of information theory, signal processing, control theory, machine learning, and dynamic systems theory. It is concerned with issues of neural representation, neural architecture, learning, nonlinear systems, and complexity as they relate to understanding the uniquely flexible and effective behaviours of humans and animals.
Eliasmith is the laboratory head for the Computational Neuroscience Research Group, which is:
...dedicated to developing and using a unified mathematical framework for modeling large-scale neurobiological systems. We are currently applying this framework to specific projects in sensory processing, motor control, and cognitive function. Our on-going work encompasses purely theoretical issues, specific biologically realistic models (e.g., of Parkinson's Disease, hemineglect, human linguistic inference, rodent navigation, among others), and practical applications (e.g., automatic text classification, clustering, and data mining). These modeling efforts are carried out in collaboration with various experimental groups who use techniques that span the range from single cell physiology to fMRI.
So this is a potential role for a philosopher, much the way philosophers who learn high-level formal logic get drafted to work out algorithms that help computers simulate linguistic understanding. You're not going to get the skills to do this merely by reading Hume and Descartes and the like. Drilling down into the group's list of research topics, under "philosophy," we find:
Recent work in philosophy of mind has focused on:
1. understanding what kind of computer the brain is
2. considering what the best kind of architecture is for understanding cognitive function
3. considering the relationship between models and theories in science generally (with consideration of neuroscience specifically)
4. characterizing mental representation in a neurally informed way
Here's the abstract for one of Eliasmith's papers, "Normalization for probabilistic inference with neurons":
Recently, there have been a number of proposals regarding how biologically plausible neural networks might perform probabilistic inference... To be able to repeatedly perform such inference, it is essential that the represented distributions be appropriately normalized. Past approaches have considered normalization mechanisms independently of inference, often leaving them unexplored, or appealing to a notion of divisive normalization that requires pooling across many neurons. Here, we demonstrate how normalization and inference can be combined into an appropriate connection matrix, eliminating the need for pooling or a division-like operation. We algebraically demonstrate that such a solution is available regardless of the inference being performed. We show that such a solution is relevant to neural computation by implementing it in a recurrent spiking neural network.
"Normalization" is a database term, and I assume it means the same thing here as in that context: it's about some kind of removal of redundancy in data representations, which in this case means (I think) that the brain is storing information compactly to support quick and efficient operations.
As with my recent post on applied ontology, I'll say that this sounds suspiciously like work to me, rather than something that would satisfy the urges that keep me interested in philosophy. Still, if you have a philosophy background and want to contribute to scientific advancement, not just improve your own humanistic/religious understanding of existence, and you don't mind getting very technical, then here's a potential avenue to pursue.