First paper from the new gig: “Combinatorial Energy Learning for Image Segmentation”, http://arxiv.org/abs/1506.04304
This is just a fraction of what is new since moving to Google, so hopefully a broader update to follow at some point…
Nexus by Ramez Naam is a near-term science fiction novel about the evolution and implications of neurotechnology (in particular, two-way communication with the brain at cellular resolution). The book is a fun, easygoing read, likely to be thought-provoking for any scientist or technologist, and it strikes a reasonable balance between extrapolation and speculation in predicting future technologies and capabilities.
Nexus also motivated me to revisit some related classics in the genre; I find it impressive how much the early novels by William Gibson (for example, Neuromancer, published in 1984) remain spiritually in tune with some present day views on what advances in computation and neuroscience may lead towards. It’s unfortunate Gibson’s more recent novels are not nearly as compelling or far-reaching in their ambition.
Please send us any comments you have!
According to Google Scholar statistics, arXiv’s repository of “learning” papers (cs.LG) now constitutes the 10th highest impact “journal” in the field — placing it above fairly well-known venues such as Machine Learning and Neural Computation. AI-related fields such as machine learning and computer vision are still largely driven by refereed conference proceedings, but assuming the underlying numbers are correct (and to be honest, the h5-median numbers seem surprisingly high), this is still a fairly noteworthy and probably positive development for academic publishing practices in computer science.
The dynamics are, of course, hard to predict — as submitting to arXiv becomes more popular, it’s possible that the “signal to noise” ratio will deteriorate and these statistics will consequently suffer. But with non-standard refereeing mechanisms now widely in place (like blog posts 😉 ) and near-real-time citation tracking (thanks again, Google Scholar), fluctuations in median quality may not seriously threaten the utility of such repositories.
A collaboration of scientists from Drexel, Penn, Princeton, and UCLA recently announced they found grid cells in the human brain. The scientists measured human brain activity by opportunistically performing (highly invasive) electrode recordings in the brains of patients who were undergoing brain surgery for severe epilepsy. A grid cell’s firing fields form a regular lattice over the environment the subject navigates, and this lattice can be directly related to the cells’ functional role of encoding position in Euclidean space (and more speculatively, perhaps position even in more abstract types of mental space).
Neuroscience can seem very slow moving and sometimes repetitive (grid cells were, of course, previously identified in other species). But if you step back a bit, it’s surprising and impressive to recognize that we are starting to generate measurements, and mechanistic hypotheses, for the basic organization of some significant parts of human thought.
(“Direct recordings of grid-like neuronal activity in human spatial navigation,” Jacobs et al., Nature Neuroscience).
Netflix has publicly released an internal tool called “Chaos Monkey,” a service that will basically do you the favor of occasionally killing your cloud-based processes in various ways. The goal is to encourage robustness of an overall platform to individual program failure by persistently incorporating failure into the software development life-cycle.
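The core idea is simple enough to sketch in a few lines. This is not Netflix’s actual tool (Chaos Monkey terminates cloud instances on a schedule); it’s a hypothetical, minimal Python illustration of the same principle — periodically pick one component at random and kill it, with the kill action injected so the policy is separate from the mechanism:

```python
import random

def chaos_step(targets, kill, rng=random):
    """Pick one target at random and terminate it via the supplied
    `kill` callable; return the chosen victim.

    `targets` might be process IDs, container names, or instance IDs;
    `kill` is whatever actually does the damage (e.g. os.kill, a cloud
    API call). Keeping it injectable makes the chaos policy testable.
    """
    victim = rng.choice(targets)
    kill(victim)
    return victim
```

In a real deployment this would run continuously (and only during business hours, so engineers are around to respond) — the point being that failure is injected routinely, not just during planned drills.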
More generally — do large, complex, and evolved systems inevitably require mechanisms for frequent self-termination at the component level? This is a principle which biology, of course, fully exploits.
Nick Weiler from Stanford has written a great summary describing how modern neuroscience thinks about the “canonical cortical circuit” and how recent results from Randy Bruno’s lab at Columbia provide some slight modifications to the traditional view.
Scott Aaronson has an interesting new essay online about whether there is a notion of ‘free will’ compatible with modern physics. The argument has a couple of key elements:
Aaronson seems to conclude that freebits likely offer the only route by which human behavior is in principle unpredictable. (Of course in practice there are many technical difficulties, but that’s not the issue.) Entertaining stuff!
Impressive how much information about faces can be conveyed with relatively few bits (in this case, a 32×32 image). (Justification for the machine learning community spending so many years on datasets like MNIST?)