Feed aggregator

As brain extracts meaning from vision, study tracks progression of processing

MIT News - 1 hour 12 min ago

Here’s the neuroscience of a neglected banana (and a lot of other things in daily life): Whenever you look at its color — green in the store, then yellow, and eventually brown on your countertop — your mind categorizes it as unripe, ripe, and then spoiled. A new study that tracked how the brain turns simple sensory inputs, such as “green,” into meaningful categories, such as “unripe,” shows that the information follows a progression through many regions of the cortex, and not exactly in the way many neuroscientists would predict.

The study, led by researchers at MIT’s Picower Institute for Learning and Memory, undermines the classic belief that separate cortical regions play strictly distinct roles. Instead, as animals in the lab refined what they saw down to a specific understanding relevant to behavior, brain cells in each of six cortical regions operated along a continuum between sensory processing and categorization. To be sure, general patterns were evident for each region, but activity associated with categorization was shared surprisingly widely, say the authors of the study published in the Proceedings of the National Academy of Sciences.

“The cortex is not modular,” says Earl Miller, Picower Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT. “Different parts of the cortex emphasize different things and do different types of processing, but it is more of a matter of emphasis. It’s a blend and a transition from one to the other. This extends up to higher cognition.”

The study not only refines neuroscientists’ understanding of a core capability of cognition, but it also could inform psychiatrists’ understanding of disorders in which categorization judgments are atypical, such as schizophrenia and autism spectrum disorders, the authors said.

Scott Brincat, a research scientist in Miller’s Picower lab, and Markus Siegel, principal investigator at the University of Tübingen in Germany, are the study’s co-lead authors. Tübingen postdoc Constantin von Nicolai is a co-author.

From seeing to judging

In the research, animals played a simple game. They were presented with shapes that cued them to judge what came next — either a red or green color, or dots moving in an upward or downward direction. Based on the initial shape cue, the animals learned to glance left to indicate green or upward motion, or right to indicate red or downward.

Meanwhile the researchers were eavesdropping on the activity of hundreds of neurons in six regions across the cortex: prefrontal (PFC), posterior inferotemporal (PIT), lateral intraparietal (LIP), frontal eye fields (FEF), and visual areas MT and V4. The team analyzed the data, tracking each neuron’s activity over the course of the game to determine how much it participated in sensory vs. categorical work, accounting for the possibility that many neurons might well do at least a little of both. First they refined their analysis in a computer simulation, and then applied it to the actual neural data.
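
For readers who want a concrete sense of what such an analysis can look like, the sketch below is a minimal, hypothetical illustration: it compares how much of each simulated neuron’s firing-rate variance is explained by the raw stimulus value versus the learned category label, then places each neuron on a sensory-to-categorical continuum. The simulated data, variable names, and simple variance comparison are assumptions made for this sketch, not the authors’ published analysis pipeline.

    # Hypothetical illustration only: place each recorded neuron on a
    # sensory-vs-category continuum by asking how much of its firing-rate
    # variance is explained by the raw stimulus value versus the learned
    # category label. The simulated data and this simple variance comparison
    # are assumptions for the sketch, not the study's actual pipeline.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_neurons = 500, 120

    stimulus = rng.uniform(-1.0, 1.0, size=n_trials)   # e.g., signed motion along the up/down axis
    category = (stimulus > 0).astype(float)            # the "upward" vs. "downward" judgment

    # Simulated firing rates: each neuron mixes sensory and categorical signals plus noise.
    sensory_gain = rng.uniform(0.0, 2.0, size=n_neurons)
    category_gain = rng.uniform(0.0, 2.0, size=n_neurons)
    rates = (np.outer(stimulus, sensory_gain)
             + np.outer(category, category_gain)
             + rng.normal(scale=0.5, size=(n_trials, n_neurons)))

    def frac_var_explained(x, y):
        # For a single predictor, the squared Pearson correlation equals the
        # fraction of variance in y explained by a linear fit on x.
        return np.corrcoef(x, y)[0, 1] ** 2

    sensory_r2 = np.array([frac_var_explained(stimulus, rates[:, i]) for i in range(n_neurons)])
    category_r2 = np.array([frac_var_explained(category, rates[:, i]) for i in range(n_neurons)])

    # Continuum index: -1 means purely sensory, +1 means purely categorical.
    continuum = (category_r2 - sensory_r2) / (category_r2 + sensory_r2 + 1e-12)
    print("mean continuum index:", float(continuum.mean()))

In the actual experiment, of course, the inputs would be spike counts recorded from PFC, PIT, LIP, FEF, MT, and V4 rather than simulated rates.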

They found that while sensory processing was largely occurring where classic neuroscience would predict, most heavily in MT and V4, categorization was surprisingly distributed. As expected, the PFC led the way, but the FEF, LIP, and PIT often showed substantial categorization activity, too.

“Our findings suggest that, although brain regions are certainly specialized, they share a lot of information and functional similarities,” Siegel says. “Thus, our results suggest the brain should be thought of as a highly connected network of talkative related nodes, rather than as a set of highly specialized modules that only sparsely hand off information to each other.”

The patterns of relative sensory and categorization activity varied by task, too. Few neuroscientists would be surprised that V4 cells were particularly active for color sensation while MT cells were active for sensing motion, but, more interestingly, category signals were more widespread. For example, most of the areas were involved in categorizing color, including those traditionally thought to be specialized for motion.

The scientists also note another key pattern. In their analysis they could discern the dimensionality of the information the neurons were processing, and found that sensory information processing was highly multi-dimensional (i.e. as if considering many different details of the visual input), while categorization activity involved much greater focus (i.e. as if just judging “upward” or “downward”).
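
As a rough illustration of one common way to quantify such dimensionality (the participation ratio of a population’s principal-component spectrum), the sketch below contrasts a simulated, many-dimensional “sensory-like” signal with an essentially one-dimensional “category-like” signal. This heuristic is offered for intuition only; it is not necessarily the measure used in the study, and all of the data here are made up.

    # Illustration only: estimate the "dimensionality" of a population signal
    # with the participation ratio of its principal-component (eigenvalue)
    # spectrum. Many comparably sized components give a large value, as for
    # rich sensory coding; one dominant component gives a value near 1, as for
    # a binary category signal. A common heuristic, not the paper's method.
    import numpy as np

    def participation_ratio(activity):
        # activity: (trials, neurons) array of firing rates.
        centered = activity - activity.mean(axis=0)
        eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
        eigvals = np.clip(eigvals, 0.0, None)
        return eigvals.sum() ** 2 / (eigvals ** 2).sum()

    rng = np.random.default_rng(1)
    sensory_like = rng.normal(size=(400, 50))                 # many independent dimensions
    category_like = (np.outer(rng.choice([-1.0, 1.0], 400),   # essentially one dimension
                              rng.normal(size=50))
                     + 0.1 * rng.normal(size=(400, 50)))

    print("sensory-like:", round(participation_ratio(sensory_like), 1))    # large
    print("category-like:", round(participation_ratio(category_like), 1))  # close to 1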

Cognition in the cortex

The broad distribution of activity related to categorization, Miller speculates, might be a sign that when the brain has a goal (in this case, to categorize), that goal needs to be represented broadly, even if the PFC might be where the judgment is made. It’s a bit like a business in which everyone, from the CEO down to the workers on the manufacturing floor, benefits from understanding the point of the enterprise in doing their work.

Miller also says the study extends some prior results from his lab. In a previous study, he showed that PFC neurons were able to conduct highly multidimensional information processing, while in this study they were largely focused on just one dimension. The synthesis of the two lines of evidence may be that PFC neurons are able to accommodate whatever degree of dimensionality pursuing a goal requires. They are versatile in how versatile they should be.

Let all this sink in, the next time you consider the ripeness of a banana or any other time you have to extract meaning from something you perceive.

The work was supported by the National Institute of Mental Health, the European Research Council, and the Center for Integrative Neuroscience.

Categories: In the News

Lesley University buys divinity school campus, allowing for expansion near Harvard Square

Cambridge Day - 4 hours 30 min ago
Lesley University has purchased the remaining 4.4 acres of the Episcopal Divinity School property near Harvard Square, including five historic buildings and the remaining half of a library that the university had shared with the divinity school, it was announced Wednesday.
Categories: In the News

Light-controlled polymers can switch between sturdy and soft

MIT News - 5 hours 17 min ago

MIT researchers have designed a polymer material that can change its structure in response to light, converting from a rigid substance to a softer one that can heal itself when damaged.

“You can switch the material states back and forth, and in each of those states, the material acts as though it were a completely different material, even though it’s made of all the same components,” says Jeremiah Johnson, an associate professor of chemistry at MIT, a member of MIT’s Koch Institute for Integrative Cancer Research and the Program in Polymers and Soft Matter, and the leader of the research team.

The material consists of polymers attached to a light-sensitive molecule that can be used to alter the bonds formed within the material. Such materials could be used to coat objects such as cars or satellites, giving them the ability to heal after being damaged, though such applications are still far in the future, Johnson says.

The lead author of the paper, which appears in the July 18 issue of Nature, is MIT graduate student Yuwei Gu. Other authors are MIT graduate student Eric Alt, MIT assistant professor of chemistry Adam Willard, and Heng Wang and Xiaopeng Li of the University of South Florida.

Controlled structure

Many of the properties of polymers, such as their stiffness and their ability to expand, are controlled by their topology — how the components of the material are arranged. Usually, once a material is formed, its topology cannot be changed reversibly. For example, a rubber ball remains elastic and cannot be made brittle without changing its chemical composition.

In this paper, the researchers wanted to create a material that could reversibly switch between two different topological states, which has not been done before.

Johnson and his colleagues realized that a type of material they designed a few years ago, known as polymer metal-organic cages, or polyMOCs, was a promising candidate for this approach. PolyMOCs consist of metal-containing, cage-like structures joined together by flexible polymer linkers. The researchers created these materials by mixing polymers attached to groups called ligands, which can bind to a metal atom. 

Each metal atom — in this case, palladium — can form bonds with four ligand molecules, creating rigid cage-like clusters with varying ratios of palladium to ligand molecules. Those ratios determine the size of the cages.

In the new study, the researchers set out to design a material that could reversibly switch between two different-sized cages: one with 24 atoms of palladium and 48 ligands, and one with three palladium atoms and six ligand molecules.

To achieve that, they incorporated a light-sensitive molecule called DTE into the ligand. The size of the cages is determined by the angle of bonds that a nitrogen atom on the ligand forms with palladium. When DTE is exposed to ultraviolet light, it forms a ring in the ligand, which increases the size of the angle at which the nitrogen can bond to palladium. This makes the clusters break apart and form larger clusters.

When the researchers shine green light on the material, the ring is broken, the bond angle becomes smaller, and the smaller clusters re-form. The process takes about five hours to complete, and the researchers found they could perform the reversal up to seven times; with each reversal, a small percentage of the polymers fails to switch back, which eventually causes the material to fall apart.

When the material is in the small-cluster state, it becomes up to 10 times softer and more dynamic. “They can flow when heated up, which means you could cut them and upon mild heating that damage will heal,” Johnson says.

This approach overcomes the tradeoff that usually occurs with self-healing materials, which is that structurally they tend to be relatively weak. In this case, the material can switch between the softer, self-healing state and a more rigid state.

“Reversibly switching topology of polymer networks has never been reported before and represents a significant advancement in the field,” says Sergei Sheiko, a professor of chemistry at the University of North Carolina, who was not involved in the research. “Without changing network composition, photoswitchable ligands enable remotely activated transition between two topological states possessing distinct static and dynamic properties.”

Self-healing materials

In this paper, the researchers used the polymer polyethylene glycol (PEG) to make their material, but they say this approach could be used with any kind of polymer. Potential applications include self-healing materials, although for this approach to be widely used, palladium, a rare and expensive metal, would likely have to be replaced by a cheaper alternative.

“Anything made from plastic or rubber, if it could be healed when it was damaged, then it wouldn’t have to be thrown away. Maybe this approach would provide materials with longer life cycles,” Johnson says.

Another possible application for these materials is drug delivery. Johnson believes it could be possible to encapsulate drugs inside the larger cages, then expose them to green light to make them open up and release their contents. Applying ultraviolet light could then enable recapture of the drugs, providing a novel approach to reversible drug delivery.

The researchers are also working on creating materials that can reversibly switch from a solid state to a liquid state, and on using light to create patterns of soft and rigid sections within the same material.

The research was funded by the National Science Foundation.

Categories: In the News

X-ray data may be first evidence of a star devouring a planet

MIT News - 7 hours 17 min ago

For nearly a century, astronomers have puzzled over the curious variability of young stars residing in the Taurus-Auriga star-forming region, some 450 light years from Earth. One star in particular has drawn astronomers’ attention. Every few decades, the star’s light has faded briefly before brightening again.

In recent years, astronomers have observed the star dimming more frequently, and for longer periods, raising the question: What is repeatedly obscuring the star? The answer, astronomers believe, could shed light on some of the chaotic processes that take place early in a star’s development.

Now physicists from MIT and elsewhere have observed the star, named RW Aur A, using NASA’s Chandra X-Ray Observatory. They’ve found evidence for what may have caused its most recent dimming event: a collision of two infant planetary bodies, which produced in its aftermath a dense cloud of gas and dust. As this planetary debris fell into the star, it generated a thick veil, temporarily obscuring the star’s light.

“Computer simulations have long predicted that planets can fall into a young star, but we have never before observed that,” says Hans Moritz Guenther, a research scientist in MIT’s Kavli Institute for Astrophysics and Space Research, who led the study. “If our interpretation of the data is correct, this would be the first time that we directly observe a young star devouring a planet or planets.”

The star’s previous dimming events may have been caused by similar smash-ups, of either two planetary bodies or large remnants of past collisions that met head-on and broke apart again.

“It’s speculation, but if you have one collision of two pieces, it’s likely that afterward they may be on some rogue orbits, which increases the probability that they will hit something else again,” Guenther says.

Guenther is the lead author of a paper detailing the group’s results, which appears today in the Astronomical Journal. His co-authors from MIT include David Huenemoerder and David Principe, along with researchers from the Harvard-Smithsonian Center for Astrophysics and collaborators in Germany and Belgium.

A star cover-up

Scientists who study the early development of stars often look to the Taurus-Auriga Dark Clouds, a gathering of molecular clouds in the constellations of Taurus and Auriga, which host stellar nurseries containing thousands of infant stars. Young stars form from the gravitational collapse of gas and dust within these clouds. Very young stars, unlike our comparatively mature sun, are still surrounded by a rotating disk of debris, including gas, dust, and clumps of material ranging in size from small dust grains to pebbles, and possibly to fledgling planets.

“If you look at our solar system, we have planets and not a massive disk around the sun,” Guenther says. “These disks last for maybe 5 million to 10 million years, and in Taurus, there are many stars that have already lost their disk, but a few still have them. If you want to know what happens in the end stages of this disk dispersal, Taurus is one of the places to look.”

Guenther and his colleagues focus on stars that are young enough to still host disks. He was particularly interested in RW Aur A, which is at the older end of the age range for young stars, as it is estimated to be several million years old. RW Aur A is part of a binary system, meaning that it circles another young star, RW Aur B. Both these stars are about the same mass as the sun.

Since 1937, astronomers have recorded noticeable dips in the brightness of RW Aur A every few decades. Each dimming event appeared to last for about a month. In 2011, the star dimmed again, this time for about half a year. The star eventually brightened, only to fade again in mid-2014. In November 2016, the star returned to its full luminosity.

Some astronomers have proposed that this dimming is caused by a passing stream of gas at the outer edge of the star’s disk. Others have theorized that the dimming is due to processes occurring closer to the star’s center.

“We wanted to study the material that covers the star up, which is presumably related to the disk in some way,” Guenther says. “It’s a rare opportunity.”

An iron-clad signature

In January 2017, RW Aur A dimmed again, and the team used NASA’s Chandra X-Ray Observatory to record X-ray emission from the star.

“The X-rays come from the star, and the spectrum of the X-rays changes as the rays move through the gas in the disk,” Guenther says. “We’re looking for certain signatures in the X-rays that the gas leaves in the X-ray spectrum.”

In total, Chandra recorded 50 kiloseconds, or almost 14 hours of X-ray data from the star. After analyzing these data, the researchers came away with several surprising revelations: the star’s disk hosts a large amount of material; the star is much hotter than expected; and the disk contains much more iron than expected — not as much iron as is found in the Earth, but more than, say, a typical moon in our solar system. (Our own moon, however, has far more iron than the scientists estimated in the star’s disk.)

This last point was the most intriguing for the team. Typically, an X-ray spectrum of a star can show various elements, such as oxygen, iron, silicon, and magnesium, and the amount of each element present depends on the temperature within a star’s disk.

“Here, we see a lot more iron, at least a factor of 10 times more than before, which is very unusual, because typically stars that are active and hot have less iron than others, whereas this one has more,” Guenther says. “Where does all this iron come from?”

The researchers speculate that this excess iron may have come from one of two possible sources. The first is a phenomenon known as a dust pressure trap, in which small grains or particles such as iron can become trapped in “dead zones” of a disk. If the disk’s structure changes suddenly, such as when the star’s partner star passes close by, the resulting tidal forces can release the trapped particles, creating an excess of iron that can fall into the star.

The second theory is for Guenther the more compelling one. In this scenario, excess iron is created when two planetesimals, or infant planetary bodies, collide, releasing a thick cloud of particles. If one or both planets are made partly of iron, their smash-up could release a large amount of iron into the star’s disk and temporarily obscure its light as the material falls into the star.

“There are many processes that happen in young stars, but these two scenarios could possibly make something that looks like what we observed,” Guenther says.

He hopes to make more observations of the star in the future, to see whether the amount of iron surrounding the star has changed — a measure that could help researchers determine the size of the iron’s source. For instance, if the same amount of iron appears in, say, a year, that may signal that the iron comes from a relatively massive source, such as a large planetary collision, whereas if very little iron remains in the disk, the source was likely smaller.

“Much effort currently goes into learning about exoplanets and how they form, so it is obviously very important to see how young planets could be destroyed in interactions with their host stars and other young planets, and what factors determine if they survive,” Guenther says.

Categories: In the News

Study finds climate determines shapes of river basins

MIT News - Tue, 07/17/2018 - 23:59

There are more than 1 million river basins carved into the topography of the United States, each collecting rainwater to feed the rivers that cut through them. Some basins are as small as individual streams, while others span nearly half the continent, encompassing, for instance, the whole of the Mississippi river network.

River basins also vary in shape, which, as MIT scientists now report, is heavily influenced by the climate in which they form. The team found that in dry regions of the country, river basins take on a long and thin contour, regardless of their size. In more humid environments, river basins vary: Larger basins, on the scale of hundreds of kilometers, are long and thin, while smaller basins, spanning a few kilometers, are noticeably short and squat.

The difference, they found, boils down to the local availability of groundwater. In general, river basins are shaped by rainfall, which erodes the land as it drains down into a river or stream. In humid environments, a large fraction of rainfall seeps into the Earth, creating a water table, or a local reservoir of groundwater. When that groundwater seeps back out, it can also cut into a basin, further eroding and shifting its shape.

The researchers found that smaller basins that are formed in humid climates are heavily shaped by the local groundwater, which acts to carve out shorter, wider basins. For much larger basins that cover a more expansive geographic area, the availability of groundwater may be less consistent, and therefore plays less of a role in a basin’s shape.

The results, published today in the Proceedings of the Royal Society A, may help researchers identify ancient climates in which basins originally formed, both on Earth and beyond.

“This is the first time in which the shape of river networks has been related to climate,” says Daniel Rothman, professor of geophysics in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, and co-director of MIT’s Lorenz Center. “Work like this may help scientists infer the kind of climate that was present when river networks were initially incised.”

Rothman’s co-authors are first author and former graduate student Robert Yi, former visiting graduate student Álvaro Arredondo, graduate student Eric Stansifer, and former postdoc Hansjörg Seybold of ETH Zurich.

A climate connection

In previous work published in 2012, Rothman and his colleagues identified a surprisingly universal connection between groundwater and the way in which rivers split, or branch. The team formulated a mathematical model to discover that, in regions where erosion is caused mainly by the seepage of groundwater, rivers branch at a common angle of 72 degrees. In follow-up work, they found that this common branching angle held up in humid environments, but in dryer regions, rivers tended to split at narrower angles of around 45 degrees.
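
For reference, 72 degrees is the angle 2π/5 expressed in degrees (the form in which this branching angle is often quoted); the conversion is plain arithmetic:

    \[
    \frac{2\pi}{5}\ \mathrm{rad} \times \frac{180^{\circ}}{\pi\ \mathrm{rad}} = 72^{\circ}.
    \]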

“River networks form these beautiful branched structures, and previous work has helped explain the angles at which rivers join together to form these structures,” Yi says. “But each river is also intimately connected to a basin, which is the area of land that it drains rainwater from. So we suspected that the shapes of basins could contain some similar geometric curiosities.”

The team set out to find a similar universal pattern in the shape of river basins. To do this, they accessed datasets containing detailed maps of all the rivers and basins in the contiguous United States — more than 1 million in total — along with datasets containing two climatic parameters for every region in the country: precipitation rate and potential evapotranspiration, or the rate at which surface water would evaporate if it were present.

The datasets contained estimates of each river basin’s area, which the researchers combined with the length of each basin’s river to calculate the basin’s width. For each basin, they then computed an aspect ratio (the ratio of the basin’s length to its width), which gives an idea of the basin’s overall shape. They also calculated each basin’s aridity index — the ratio between the regional precipitation rate and potential evapotranspiration — which indicates whether the basin resides in a humid or dry environment.
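
Concretely, the shape and climate descriptors described above reduce to a few ratios. The short sketch below shows the computation with made-up example values; the function name and the numbers are illustrative assumptions, not values drawn from the study’s datasets.

    # Illustrative sketch of the basin metrics described above. Width is
    # inferred from basin area and river length; the aspect ratio compares
    # length to width; the aridity index compares precipitation to potential
    # evapotranspiration (values above 1 suggest a more humid setting).
    # The example numbers are made up for illustration only.

    def basin_metrics(area_km2, river_length_km, precip_mm_yr, pet_mm_yr):
        width_km = area_km2 / river_length_km          # effective basin width
        aspect_ratio = river_length_km / width_km      # long and thin -> large value
        aridity_index = precip_mm_yr / pet_mm_yr       # humid -> above 1, dry -> below 1
        return width_km, aspect_ratio, aridity_index

    # A hypothetical small humid basin vs. a hypothetical dry basin of similar area.
    for name, args in [("humid basin", (12.0, 4.0, 1200.0, 900.0)),
                       ("dry basin",   (12.0, 8.0,  300.0, 1400.0))]:
        w, ar, ai = basin_metrics(*args)
        print(f"{name}: width={w:.1f} km, aspect ratio={ar:.1f}, aridity index={ai:.2f}")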

When they plotted each basin’s aspect ratio against the local aridity index, they found an interesting trend: Basins in dry climates, regardless of size, took on long, thin shapes, as did large basins in humid environments. However, smaller basins in similarly humid regions looked significantly wider and shorter. 

“We found that arid basins roughly kept their shape with size, but humid basins got narrower as they grew larger,” Yi says. “That confused us for a long time.”

Answers in the ground

The researchers suspected that the dichotomy between dry- and humid-type shapes stemmed from their previous observations of branching rivers: In humid climates, groundwater plays an additional role, beyond rainfall, in creating wider branching of rivers than in drier climates. They reasoned that groundwater may play a similar role in widening a river’s basin.

To check their hypothesis, they looked at characteristics of each basin’s geology, such as the types of rock and soil underlying the basin, and the depth to which groundwater might penetrate. In general, they found that in drier climates, any rainwater that seeped into the ground would dribble deep below the surface, like liquid running through a Brillo pad. Any resulting reservoir, or water table, would be too deep for groundwater to come back up to the surface.

In contrast, in more humid environments, water is more likely to saturate the soil, like tap water soaking a damp sponge. In these climates, water would seep into the ground, creating large water tables close to the surface.

The team then computed the extent to which stream locations corresponded to locations where groundwater emerged. They found a greater correspondence where there was more groundwater seeping out around river basins in humid climates, versus in drier climates. This suggests that groundwater plays a bigger role in carving out humid basins, creating wider, more squat shapes, in contrast to the longer, thinner shapes of dry-climate river basins.

This groundwater effect may be especially pronounced at smaller, more local scales over several kilometers. At much larger scales, spanning nearly half the continent, the group found that river basins, even in humid environments, took on long, thin contours, which may be attributed to the fact that, over such a vast area, the interaction between groundwater and the large-scale structure of river networks is relatively weak.

“Our paper establishes a new, large-scale connection between hydrogeology and geomorphology,” Rothman says. “It also represents an unusual application of the physics of pattern formation. … All this turns out to be connected with fractal geometry. Thus in some sense we are finding a surprising connection between climate and the fractal geometry of river networks.”

This research was supported, in part, by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences, Chemical Sciences, Geosciences and Biosciences Division.

Categories: In the News

A new way to measure women’s and girls’ empowerment in impact evaluations

MIT News - Tue, 07/17/2018 - 16:55

Women make up half the world’s population, but just 12 percent of the world’s heads of state and government. This disparity underscores a persistent reality in the 21st century: Despite steady advances in women’s rights in recent decades, gender norms and biases continue to constrain human potential around the world.

A growing number of policymakers believe that investing in women and girls’ empowerment can reduce these and other gender-based inequalities. The United Nations' Sustainable Development Goal 5, for example, seeks to achieve gender equality and empower all women and girls. Increasing empowerment is also seen as a promising strategy to unlock greater economic growth in low- and middle-income countries.

In order to design effective policies and programs, however, researchers, policymakers, and practitioners must be able to accurately measure women’s and girls’ empowerment. A new research resource from MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) addresses this challenge.

Co-authored by Rachel Glennerster, chief economist at the UK Department for International Development and former executive director of J-PAL; and Lucia Diaz-Martin and Claire Walsh of J-PAL, the “Practical Guide to Measuring Women’s and Girls’ Empowerment” offers guidance on strategies to help navigate and overcome common challenges in effectively measuring empowerment in impact evaluations.

Evaluating impact

Researchers can rarely observe people’s decision-making in real-time, and survey questions that ask study participants about decision-making do not always lead to reliable responses, particularly when questions touch on sensitive topics. For these reasons, it can be hard for researchers and practitioners to identify whether a social program or intervention actually increased people’s decision-making power, a common measure of empowerment.

Beyond survey challenges, questions arise around which outcomes best capture changes in empowerment. For example, should researchers focus on measuring educational attainment? Agency in household or community decision-making? Employment and control over income? Women and girls experience constraints that are deeply tied to their specific context. Because these constraints can vary so widely, what empowerment looks like for female students in Ghana might be very different from what it looks like for women living in a rural village in India. Researchers often grapple with how to generalize lessons learned from a particular program when what empowerment looks like can vary greatly around the world.

J-PAL’s new guide draws on strategies from multiple academic disciplines to tackle these measurement issues. Rich with case studies and concrete examples, it outlines actionable steps to improve measurement.

Determine local context

Understanding the local context is key. Before trying to measure empowerment in an impact evaluation, researchers must have a nuanced picture of the local context and the specific barriers that women face when trying to make meaningful choices about their own lives. Qualitative research methods such as semi-structured interviews, needs assessments, direct observation, and focus groups can help by creating repeated opportunities to listen closely to the people living in a particular community. A measurement strategy to quantify empowerment is only as good as researchers and practitioners’ understanding of gender and power dynamics in the local context.

In an evaluation in Bangladesh, for example, Glennerster and co-authors were interested in measuring adolescent girls’ mobility. After several focus groups with adolescent girls they learned that asking the generic question “How far away can you travel from home by yourself?” would not capture how a girl’s mobility was constrained, because the answer depended on what she was doing and for whom. Girls could travel to and from school alone, but they could not travel alone to do things that only had value to them, like going to local fairs. Since empowerment is about people’s abilities to make choices that matter to them, researchers added a question about mobility for activities that only had value to the adolescent girls in addition to the usual questions about going to school or visiting relatives.

Develop a theory for how an intervention generates impact

Developing a clear theory of how an intervention generates impact can help researchers select accurate indicators of empowerment. To identify the outcomes of a women’s empowerment program (the change or impact we expect to see) and indicators (observable signals we use to measure that change), researchers and practitioners need a deep understanding of the pathways through which the program can affect people’s lives.

Mapping these pathways, from program inputs (like funding and staff time) to long-term outcomes, is also known as building a “theory of change,” and it produces documentation of a program’s logical chain of results. This mapping process helps clarify appropriate measurement indicators, and helps researchers identify which assumptions must hold true for the program to succeed.

Develop a plan and carry out testing

Once researchers decide what outcomes to measure, they should develop and pilot data collection instruments in communities similar to ones where the evaluation will take place. This is an important reality check to make sure surveys work in local contexts.

For example, many commonly used survey questions to measure household decision-making are hard to ask and answer in practice. In Bangladesh, Glennerster and co-authors found that women gave very different answers to the general question, “Who usually makes decisions about healthcare for yourself: you, your husband, you and your husband jointly, or someone else?” and the more specific question, “If you ever need medicine, could you go buy it yourself?” Piloting different versions of a question can help researchers learn whether they are truly capturing the information they think they are. 

Non-survey instruments can also be powerful for measuring things surveys can’t capture accurately — like gender bias. In a study on female leaders in India, for example, researchers randomly assigned survey participants to hear one of two otherwise identical audio recordings of a short speech by a political leader, one spoken by a man and the other by a woman. They then asked participants to rate the leader’s effectiveness. Because the speaker’s gender was the only difference between the two recordings, researchers could use this technique to measure bias against female leaders.

After researchers conduct a comprehensive pilot and incorporate lessons learned, they should design a practical data collection plan. Although data collection can be full of unexpected challenges, finding reliable, culturally appropriate, and convenient methods and times to collect survey data can help overcome measurement errors.

Why measure empowerment?

“If measurement techniques are inaccurate, it can be difficult to understand whether programs are effective, and how to improve on existing approaches,” says J-PAL’s Claire Walsh. Ensuring that measurement tools are reliable and precise can help researchers avoid drawing inaccurate conclusions about the impact of a program.

J-PAL recently announced new efforts to further expand the base of policy-relevant evidence related to gender and women’s empowerment. Alongside this research, J-PAL continues to create practical resources to support policymakers, practitioners, and researchers in effectively incorporating analysis of gender dynamics and impacts into their impact evaluations. For more information about this work, visit povertyactionlab.org/gender.

Categories: In the News

Challenge seeks innovations to improve wellbeing in aging populations

MIT News - Tue, 07/17/2018 - 15:15

A global innovation challenge for the improvement of well-being in aging populations was recently announced by the MIT AgeLab and a group of industry, academic, and government partners affiliated with Massachusetts Governor Charlie Baker’s Council to Address Aging. In Good Company: The 2018 Optimal Aging Challenge seeks to develop breakthrough technologies, community resources, and solutions that reduce social isolation and loneliness among older adults.

Despite the advent of lightning-speed technological connectivity, 29 percent of older adults are socially isolated, and both isolation and loneliness are known to have adverse consequences on individual and community health. Research from the American Psychological Association suggests that the loneliness epidemic now represents a threat to public health rivaling that of obesity.

“Led by our Council to Address Aging, Massachusetts is thinking differently about aging and we are proud to be one of the few states in the country certified by AARP for our commitment to become more ‘age-friendly,’” said Governor Baker. “The In Good Company Challenge is a great opportunity to improve the lives of older adults. We look forward to seeing what this challenge will develop so that Massachusetts can help ensure that those who grew up, raised families and built our communities, can continue to contribute their energy, experience, and talents toward making Massachusetts a great place.”

Competition sponsors of the In Good Company: Optimal Aging Challenge include GE Healthcare, MIT’s AgeLab, and Benchmark Senior Living, in collaboration with three members of the Governor’s Council to Address Aging in Massachusetts Innovation and Technology Workgroup. Challenge awards are being funded by the MIT AgeLab and Benchmark Senior Living. Challenge administration is being delivered by GE GENIUSLINK.

“The Governor’s aging initiative, coupled with this challenge, is both an opportunity to improve the lives of older adults in Massachusetts and an unprecedented call to create a new economic engine of innovation in the Commonwealth driven by a world that is living longer and wanting to live better,” said Joseph Coughlin, director of MIT AgeLab. 

Representatives from the competition sponsors and the Governor’s Council on Aging will serve as judges for the challenge and are seeking proposals across four key pillars:

  • caregiving,
  • transportation services,
  • eldercare housing solutions, and
  • employment and volunteerism opportunities among older populations.

Judges will evaluate entries based upon criteria including, but not limited to, prospective market size, accessibility across diverse populations, and commercial viability. Up to four of the most promising entries will receive an initial cash prize of $5,000 each and may have an opportunity to participate in public and private endeavors with prize sponsors and their partner entities to develop their solution such that it can better serve the older population and their networks.

“There’s a perception that our aging communities have been underserved by advances in technology, as well as innovations in business models, service models, and beyond; with this initiative, we hope to start redressing that imbalance,” said Ger Brophy, head of cell therapy, life sciences at GE Healthcare. “There is a deep interest in the transformational ideas and creativity this challenge will inspire,” added Tom Grape, chair and CEO of Benchmark Senior Living. “When implemented, these ideas will connect the older adults we respect and love to what’s meaningful and possible at every stage of their lives.”

To participate, submit an entry by Sept. 28 at 5 p.m. EDT. Judges will evaluate submissions throughout October and November and announce winners in December.

Categories: In the News

School of Engineering second quarter 2018 awards

MIT News - Tue, 07/17/2018 - 12:20

Members of the MIT engineering faculty receive many awards in recognition of their scholarship, service, and overall excellence. Every quarter, the School of Engineering publicly recognizes their achievements by highlighting the honors, prizes, and medals won by faculty working in our academic departments, labs, and centers.

The following awards were given from April through June, 2018. Submissions for future listings are welcome at any time.

Emilio Baglietto, of the Department of Nuclear Science and Engineering, won the Ruth and Joel Spira Award for Distinguished Teaching on May 14.

Hari Balakrishnan, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the HKN Best Instructor Award on May 18.

Robert C. Berwick, of the Department of Electrical Engineering and Computer Science, won the Jerome H. Saltzer Award for Excellence in Teaching on May 18.

Michael Birnbaum, of the Department of Biological Engineering and the Koch Institute for Integrative Cancer Research, became a 2018 Pew-Stewart Scholar for Cancer Research on June 14.

Lydia Bourouiba, of the Department of Civil and Environmental Engineering, won the Smith Family Foundation Odyssey Award on June 25.

Michele Bustamante, of the Materials Research Laboratory, was awarded a 2018-19 MRS/TMS Congressional Science and Engineering Fellowship on May 22.

Oral Buyukozturk, of the Department of Civil and Environmental Engineering, won the George W. Housner Medal for Structural Control and Monitoring on May 31.

Luca Carlone, of the Department of Aeronautics and Astronautics, won the IEEE Transactions on Robotics “King-Sun Fu” Best Paper Award on May 24.

Gang Chen, of the Department of Mechanical Engineering, was elected a 2018 fellow to the American Academy of Arts and Sciences on April 18.

Erik Demaine, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, was awarded the Burgess (1952) and Elizabeth Jamieson Prize for Excellence in Teaching on May 18.

Srinivas Devadas, of the Department of Electrical Engineering and Computer Science, won the Bose Award for Excellence in Teaching in May.

Thibaut Divoux, of the Department of Civil and Environmental Engineering, won the 2018 Early Career Arthur B. Metzner Award of the Rheology Society on May 3.

Dennis M. Freeman, of the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, won an Innovative Seminar Award on May 16; he also won the Burgess (1952) and Elizabeth Jamieson Prize for Excellence in Teaching on May 18.

Neville Hogan, of the Department of Mechanical Engineering, won the 2018 EMBS Academic Career Achievement Award on May 10.

Gim P. Hom, of the Department of Electrical Engineering and Computer Science, was honored with the IEEE/Association for Computing Machinery Best Advisor Award on May 18.

Rohit Karnik, of the Department of Mechanical Engineering, and Regina Barzilay and John N. Tsitsiklis, of the Department of Electrical Engineering and Computer Science, won the Ruth and Joel Spira Award for Distinguished Teaching in May.

Dina Katabi, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, was presented with an honorary degree from The Catholic University of America on May 12; she also won the Association for Computing Machinery 2017 Prize in Computing on April 4.

Rob Miller, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the Richard J. Caloggero Award on May 18.

Eytan Modiano, of the Department of Aeronautics and Astronautics and the Laboratory for Information and Decision Systems, won the IEEE Infocom best paper award on April 18.

Stefanie Mueller, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, received an honorable mention for the Association for Computing Machinery Doctoral Dissertation Award on June 23. She also won the EECS Outstanding Educator Award on May 18.

Dava J. Newman, of the Department of Aeronautics and Astronautics, won the AIAA Jeffries Aerospace Medicine and Life Sciences Research Award on May 4.

Christine Ortiz, of the Department of Materials Science and Engineering, was awarded a J-WEL Grant on May 7.

Ronitt Rubinfeld, of the Department of Electrical Engineering and Computer Science, won the Capers and Marion McDonald Award for Excellence in Mentoring and Advising in May.

Jennifer Rupp, of the Department of Materials Science and Engineering, won a Displaying Futures Award on June 12.

Alex K. Shalek, of the Institute for Medical Engineering and Science, has been named one of the 2018 Pew-Stewart Scholars for Cancer Research on June 14.

Alex Slocum, of the Department of Mechanical Engineering, won the Ruth and Joel Spira Outstanding Design Educator Award on June 11.

Michael P. Short, of the Department of Nuclear Science and Engineering, won the Junior Bose Award in May.

Joseph Steinmeyer, of the Department of Electrical Engineering and Computer Science, won the Louis D. Smullin ('39) Award for Excellence in Teaching on May 18.

Christopher Terman, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won an MIT Gordon Y Billard Award on May 10.

Tao B. Schardl, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won an EECS Outstanding Educator Award on May 18.

Yang Shao-Horn, of the Department of Mechanical Engineering, won the Faraday Medal on April 19.

Vinod Vaikuntanathan, of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, won the Harold E. Edgerton Faculty Achievement Award on April 26.

Kripa Varanasi, of the Department of Mechanical Engineering, won the Gustus L. Larson Memorial Award on May 10.

David Wallace, of the Department of Mechanical Engineering, was honored with the Ben C. Sparks Medal on April 27.

Amos Winter, of the Department of Mechanical Engineering, was named a leader in New Voices in Sciences, Engineering, and Medicine on June 8.

Bilge Yildiz, of the Department of Nuclear Science and Engineering and the Department of Materials Science and Engineering, won the Ross Coffin Purdy Award on June 22.

Laurence R. Young, of the Department of Aeronautics and Astronautics and the Institute for Medical Engineering and Science, won the Life Sciences and Biomedical Engineering Branch Aerospace Medical Association Professional Excellence Award on April 27.

Categories: In the News

3Q: Julie Newman on MIT’s pioneering solar purchase

MIT News - Mon, 07/16/2018 - 23:59

In 2016, MIT announced that it would neutralize 17 percent of its carbon emissions through a unique collaboration with Boston Medical Center and Post Office Square Redevelopment Corporation: The three entities formed an alliance to buy solar power, demonstrating a partnership model for climate-change mitigation and the advancement of large-scale solar development.

Boston Mayor Martin Walsh recently announced that his city will undertake a similar but much larger effort to purchase solar energy in conjunction with cities across the U.S., including Chicago, Houston, Los Angeles, Orlando, and Portland, Oregon. At the time of this announcement, Walsh called upon more cities to join in this collective renewable energy initiative. In describing the agreement, Boston officials said the effort is modeled on MIT’s 2016 effort.

Julie Newman, the Institute’s director of sustainability, spoke with MIT News about the power of MIT’s pioneering model for purchasing solar energy.

Q: Can you describe MIT’s alliance with Boston Medical Center and Post Office Square Redevelopment Corporation to purchase solar energy?

A: Climate partnerships are not new to cities like Boston and Cambridge, where urban stakeholders work together to try to advance solutions for climate mitigation and resiliency. In Boston, MIT participates on the city’s Green Ribbon Commission, which is co-chaired by Mayor Walsh and includes leaders from Boston’s business, institutional, and civic sectors. In MIT’s host city of Cambridge, the Institute works collaboratively with the municipality on a range of initiatives related to solar energy, resiliency planning, building energy use, and other efforts focused on climate change.

In October 2016 MIT, Boston Medical Center, and Post Office Square Redevelopment Corporation formed an alliance to buy electricity from a large new solar power installation. The goal was to add carbon-free energy to the grid and, equally important, we wanted to demonstrate a partnership model for other organizations.

Our power purchase agreement, or PPA, enabled the construction of Summit Farms, a 650-acre, 60-megawatt solar farm in North Carolina. The facility is now operational and is one of the largest renewable-energy projects ever built in the U.S. through an alliance like this.

MIT committed to buying 73 percent of the power generated by Summit Farms’ 255,000 solar panels, with BMC purchasing 26 percent and POS purchasing the remainder. At the time, MIT’s purchase of 44 megawatts — equivalent to 40 percent of the Institute’s 2016 electricity use — was among the largest publicly announced purchases of solar energy by any American college or university.

Summit Farms would not have been built without the commitments from MIT and its partners. The emissions-free power it generates every year represents an annual abatement of carbon dioxide emissions equivalent to removing more than 25,000 cars from the road.

A unique provision in the agreement between MIT and Summit Farms will provide MIT researchers with access to a wealth of data on performance parameters at the North Carolina site. This research capability amplifies the project’s impact and contributes to making the MIT campus a true living laboratory for advances in technology, policy, and business models.

Q: What exactly has the City of Boston announced that it plans to do, and how is this modeled on MIT’s solar-power collaboration?

A: MIT, our collaborators, the city of Boston, and the numerous other cities joining Mayor Walsh all share an interest in reducing carbon emissions at the global scale. We want solutions that will transform the energy market, create clean-energy jobs, and sustain healthy, thriving communities. In collaboration, we can have a greater impact than we could if we tried to mitigate emissions on an institute-by-institute or city-by-city basis. By combining our purchasing power, we can escalate the demand for renewable energy more rapidly, triggering new development and installation of renewables through the energy sector in the U.S. 

Our project used a convening force, the group A Better City, to invite disparate entities to combine efforts to increase demand for renewable energy. Similarly, Mayor Walsh has called upon leading members of the Climate Mayors Network, representing over 400 cities and 70 million people, to combine their collective purchasing and bargaining power to reduce energy costs and spark the creation of large-scale renewable energy projects across the country. This invitation has launched a coast-to-coast effort to increase the demand for renewable energy across the eight regional grids.

Q: Has the Institute fielded expressions of interest from other entities interested in trying this model? Is there evidence that it will spread further?

A: We are excited about this solution, and we’ve shared this model of solar collaboration with peers across the country. We’ve hosted webinars, meetings, and presentations, and received immediate and passionate interest from statewide systems, large corporations, and multiuniversity partnerships that have since pursued collective renewable energy projects. We can now point to a dozen or more projects that have been inspired by this model and are pursuing renewable energy aggregation.

It is important to note that the success of an external collaboration is only as strong as our internal collaboration. The development of the MIT power purchase agreement relied on expertise from more than eight academic and administrative departments, including researchers from related fields, engineers in our utilities area, and staff with expertise in purchasing, finance, and legal areas. We are on the verge of tapping back into these partnerships as we look ahead to determine what is next.

We now have real-time data on energy, emissions avoidance, and financial performance and can evaluate the real world impacts of our project. These findings will influence our thinking going forward. We are considering such questions as how can MIT continue to amplify our efforts? How can we shape our energy impact in the world, and what is the best way to pursue our interest in collectively transforming the energy market? We are continuously broadening our clean energy knowledge base, from multidimensional carbon-accounting frameworks to the exploration of new technologies. Along the way, we have learned that the location of a new wind or solar project matters significantly to its carbon dioxide reduction impact. (The project has a greater benefit if it’s located in a dirtier power grid.) This will inform our work as we actively pursue new partnerships for future scenarios.

Categories: In the News

Cambridge Community Foundation Raises Funds For Local Immigrants

Scout Cambridge - Mon, 07/16/2018 - 13:26

As the United States becomes increasingly unfriendly to immigrants at the federal level, the Cambridge Community Foundation (CCF) is working to support immigrants within the bounds of the city. The CCF supports projects that help Cambridge residents from all backgrounds. The nonprofit, founded in 1916, has provided help to immigrants in the community for years—by […]

The post Cambridge Community Foundation Raises Funds For Local Immigrants appeared first on Scout Cambridge.

Categories: In the News

Kristala Prather: Advancing energy-efficient biochemistry

MIT News - Mon, 07/16/2018 - 12:20

Kristala Jones Prather will be the first person to tell you the difference between science and engineering. She’ll also be the first to tell you how important both are to the research process.

“Science is about discovery, and engineering is about application,” Prather says. “The beauty of being a scientist and doing discovery work is the freedom and creativity. For engineers, it’s all about how these discoveries can be applied and solve problems in the real world.”

She would know: Over the course of her career, she’s been both. While working in bioprocess research and development at Merck, Prather delved into the engineering side of biology and chemistry. “My decision to work in industry before pursuing an academic career was very intentional,” she says. “I wanted to get a sense of what to think about when bringing products to market. How is new technology adopted? Can you improve upon existing processes?”

Prather’s early years in industry shaped her knowledge of the process pipeline she is currently seeking to streamline through scientific inquiry. As the Arthur D. Little Professor of Chemical Engineering at MIT, she conducts research that ties together the fields of energy, biology, and chemistry. While biology and energy are most often connected in discussions of biofuels, Prather’s research focuses on a different kind of energy advancement: more energy-efficient processes for the manufacture of biochemicals.

“I tell my students, look at the carpet in this room,” Prather says. “The probability is high that 50 percent or more of the materials in that carpet were produced using oil. So how do we decrease that number?” Prather’s lab works on engineering bacteria to produce biochemicals, thus replacing the fossil-fuel based processes currently responsible for making so many of the world’s materials.

Such research requires expertise in chemical engineering, biological engineering, and genetics. Using genetic engineering, Prather and her team can manipulate the genes of microbes to control the kind and quantity of products they produce. These products could be anything from insulin or human growth hormone to the synthetic materials whose production would otherwise have required the use of oil or other fossil-based products.

“The goal in exploring bio-based methods for creating these chemicals is to design a less energy-intensive process that is still cost-competitive,” she says. “We want to use less energy to get to the same molecules.”

Yet for all the engineering knowledge that Prather gained while she was working in industry, something major was missing.

“When I looked at the part of my job I liked best, it had to do with mentoring young scientists,” she says. “Training and teaching them how to be independent researchers in their fields was the most important and enjoyable part of the job to me.” This realization spurred Prather to make the switch back to academia that she had always been planning. “In industry, you eventually move away from mentoring younger researchers as you move up in the ranks,” she says. “In academia, mentoring is the kernel at the center that always stays the same.”

With her current classes, Prather has ample opportunity to mentor the next generation of MIT scientists and engineers. She teaches 10.10 (Introduction to Chemical Engineering) to first-year and sophomore undergraduates, as well as 10.542 (Biochemical Engineering) for graduate students and upper-level undergrads. Opportunities to reach students present themselves outside of the classroom as well. In fall 2017, Prather was invited by MIT President L. Rafael Reif to be part of a small group of professors addressing incoming first-years at a welcome assembly their first week on campus.

The advice she gave to students then is a message she believes all MIT students need to hear.

“You need to embrace failure,” she says. “Recognize that not everything you attempt is going to work out.”

But there’s an important corollary to this advice. “Students, especially at MIT, should also remember: You belong here,” she says. “It doesn’t matter how many AP classes you come in with or anything like that. And there are a lot of people here to help you get through.”

When asked what the most challenging part of being a professor is, Prather says: “Just how much stuff there is to do. Not the volume, but the diversity — that mix of administrative and academic work.” Still, the most rewarding part of the job is easy to pinpoint. “The students,” she says. “The day a student in my lab defends their thesis is the happiest and saddest day of my life. Happiest because I’m so proud of what they’ve done. But saddest because the time has come for them to leave.”

Prather and her colleague Angela Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science at MIT, are advancing the future of energy bioscience through their work as co-directors of MITEI’s Low-Carbon Energy Center for Energy Bioscience Research. The goal of the center, Prather says, is to “use the toolbox of biology to engineer solutions to clean energy challenges.”

Prather and Belcher are bringing together a host of biological and chemical engineers from across the Institute to perform research in a wide range of areas. Prather’s own work using genetics to engineer biochemicals is complemented by myriad other projects her colleagues have in the works. Research topics range from biochemical remediation, or the use of bacteria to clean up oil spills; to biological generation of liquid fuels from natural gas; to engineering a virus capable of improving solar cell efficiency.

“We’re really trying to pull together the collective talents of researchers at MIT who are using biology to solve a range of problems,” she says. The results could have positive impacts on critical fields including renewable energy, clean fuel sources, infrastructure, storage, and chemical processing and production.

This article appears in the Spring 2018 issue of Energy Futures, the magazine of the MIT Energy Initiative.

Categories: In the News

Connor Coley named 2018 DARPA Riser

MIT News - Mon, 07/16/2018 - 12:10

The U.S. Defense Advanced Research Projects Agency (DARPA) has honored Connor Coley, who is currently pursuing his graduate degree in chemical engineering, as one of 50 DARPA Risers for 2018.

The award states that DARPA Risers are considered by the agency to be “up-and-coming standouts in their fields, capable of discovering and leveraging innovative opportunities for technological surprise — the heart of DARPA’s national security mission.”

Currently a member of the Klavs Jensen and William Green research groups, Coley is focused on improving automation and computer assistance in synthesis planning and reaction optimization with medicinal chemistry applications. He is more broadly interested in the design and construction of automated microfluidic platforms for analytics (e.g. kinetic or process understanding) and on-demand synthesis.

The goal of many synthetic efforts, particularly in early stage drug discovery, is to produce a target small molecule of interest. At MIT, Coley’s early graduate research focused on streamlining organic synthesis from an experimental perspective: screening and optimizing chemical reactions in a microfluidic platform using as little material as possible.

But even with an automated platform to do just that, researchers need to know exactly what reaction to run. They must first figure out the best synthetic route to make the target compound and then turn to the chemical literature to define a suitable parameter space to operate within. As part of the DARPA Make-It program, Coley and his colleagues started working toward a much more ambitious goal. Instead of automating only the execution of reactions, could a researcher automate the entire workflow of route identification, process development, and experimental execution?

Coley's recent research has focused on various aspects of computer-aided synthesis planning to help make a fully autonomous synthetic chemistry platform, leveraging techniques in machine learning to meaningfully generalize historical reaction data. This includes questions of how best to propose novel retrosynthetic pathways and validate those suggestions in silico before carrying them out in the laboratory. The overall goal of his work is to develop models and computational approaches that — in combination with more traditional automation techniques — will improve the efficiency of small molecule discovery.
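
One common ingredient of such planners is applying reaction templates in reverse to a target molecule and then scoring the proposed precursors. The snippet below is a purely illustrative sketch, not Coley's actual software: it uses the open-source RDKit toolkit to apply a single hand-written retrosynthetic template, an amide disconnection, to a toy target. Real systems learn and rank many thousands of such templates from historical reaction data.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Hypothetical one-step retrosynthesis: disconnect an amide bond back into
# a carboxylic acid and an amine. Template and target are illustrative only.
retro_amide = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[N:3]>>[C:1](=[O:2])[OH].[N:3]"
)
target = Chem.MolFromSmiles("CC(=O)Nc1ccccc1")  # acetanilide, a toy target

for precursors in retro_amide.RunReactants((target,)):
    for mol in precursors:
        Chem.SanitizeMol(mol)
    # Prints the proposed starting materials, e.g. acetic acid + aniline
    print(" + ".join(Chem.MolToSmiles(mol) for mol in precursors))
```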

“It's been a privilege to participate in the Make-It program and I am grateful for being named a DARPA Riser,” Coley says. “I'm excited to take part in the D60 anniversary event and talk about my ideas for how this work can be extended to more broadly transform the process of molecular discovery.”

Coley received his BS in chemical engineering from Caltech in 2014 and is a recipient of MIT’s Robert T. Haslam Presidential Graduate Fellowship.

Coley will participate in D60, DARPA’s 60th Anniversary Symposium, Sept. 5-7 at Gaylord National Harbor. D60 will provide attendees the opportunity to engage with up-and-coming innovators, including some of today’s most creative and accomplished scientists and technologists. DARPA works to inspire attendees to explore future technologies, their potential application to tomorrow’s technical and societal challenges, and the dilemmas those applications may engender. D60 participants will have the opportunity to be a part of the new relationships, partnerships, and communities of interest that this event aims to foster, and advance dialogue on the pursuit of science in the national interest.

Categories: In the News

New study again proves Einstein right

MIT News - Mon, 07/16/2018 - 11:00

The universe should be a predictably symmetrical place, according to a cornerstone of Einstein’s theory of special relativity, known as Lorentz symmetry. This principle states that any scientist should observe the same laws of physics, in any direction, and regardless of one’s frame of reference, as long as that frame is moving at a constant speed.

For instance, as a consequence of Lorentz symmetry, you should observe the same speed of light — 300 million meters per second — whether you are an astronaut traveling through space or a molecule moving through the bloodstream.

But for infinitesimally small objects that operate at incredibly high energies, and over vast, universe-spanning distances, the same rules of physics may not apply. At these extreme scales, there may exist a violation of Lorentz symmetry, or Lorentz violation, in which a mysterious, unknown field warps the behavior of these objects in a way that Einstein would not predict.

The hunt has been on to find evidence of Lorentz violation in various phenomena, from photons to gravity, with no definitive results. Physicists believe that if Lorentz violation exists, it might also be seen in neutrinos, the lightest known particles in the universe, which can travel over vast distances and are produced by cataclysmic high-energy astrophysical phenomena. Any confirmation that Lorentz violation exists would point to completely new physics that cannot be explained by Einstein’s theory.

Now MIT scientists and their colleagues on the IceCube Experiment have led the most thorough search yet for Lorentz violation in neutrinos. They analyzed two years of data collected by the IceCube Neutrino Observatory, a massive neutrino detector buried in the Antarctic ice. The team searched for variations in the normal oscillation of neutrinos that could be caused by a Lorentz-violating field. According to their analysis, no such abnormalities were observed in the data, which comprises the highest-energy atmospheric neutrinos that any experiment has collected.

The team’s results, published today in Nature Physics, rule out the possibility of Lorentz violation in neutrinos within the high energy range that the researchers analyzed. The results establish the most stringent limits to date on the existence of Lorentz violation in neutrinos. They also provide evidence that neutrinos behave just as Einstein’s theory predicts.

“People love tests of Einstein’s theory,” says Janet Conrad, professor of physics at MIT and a lead author on the paper. “I can’t tell if people are cheering for him to be right or wrong, but he wins in this one, and that’s kind of great. To be able to come up with as versatile a theory as he has done is an incredible thing.”

Conrad’s co-authors at MIT, who also led the search for Lorentz violation, are postdoc Carlos Argüelles and graduate student Gabriel Collin, who collaborated closely with Teppei Katori, a former postdoc in Conrad’s group who is now a lecturer in particle physics at Queen Mary University of London. Their co-authors on the paper include the entire IceCube Collaboration, comprising more than 300 researchers from 49 institutions in 12 countries.

Flavor change

Neutrinos exist in three main varieties, or as particle physicists like to call them, “flavors”: electron, muon, and tau. As a neutrino travels through space, its flavor can oscillate, or morph into any other flavor. The way neutrinos oscillate typically depends on a neutrino’s mass or the distance that it has traveled. But if a Lorentz-violating field exists somewhere in the universe, it could interact with neutrinos passing through that field, and affect their oscillations.
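
In the standard picture, the probability that a muon neutrino keeps its flavor follows a simple formula set by the mass splitting, the distance traveled, and the energy; a Lorentz-violating field would add extra, energy-dependent terms to that oscillation phase. The sketch below evaluates only the standard two-flavor formula, with illustrative atmospheric-oscillation parameters, as a rough reference point.

```python
import math

def muon_survival(energy_gev, baseline_km, delta_m2_ev2=2.5e-3, sin2_2theta=0.99):
    """Standard two-flavor survival probability:
    P(nu_mu -> nu_mu) = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV]).
    Parameter values here are illustrative atmospheric-oscillation numbers.
    """
    phase = 1.27 * delta_m2_ev2 * baseline_km / energy_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Neutrinos crossing roughly the Earth's diameter (~12,700 km):
for e_gev in (1.0, 10.0, 100.0):
    print(f"E = {e_gev:6.1f} GeV -> P(survival) = {muon_survival(e_gev, 12_700):.3f}")
```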

To test whether Lorentz violation can be found in neutrinos, the researchers looked to data gathered by the IceCube Observatory. IceCube is a 1-gigaton particle detector designed to observe high-energy neutrinos produced by the most violent astrophysical sources in the universe. The detector is composed of 5,160 digital optical modules, or light sensors, attached to vertical strings that are frozen into 86 boreholes arrayed over a cubic kilometer of Antarctic ice.

Neutrinos streaming through space and the Earth can interact with the ice that makes up the detector or the bedrock below it. This interaction produces muons — charged particles that are heavier than electrons. Muons emit light as they pass through the ice, producing long tracks that can cross the entire detector. Based on the recorded light, scientists can reconstruct the trajectory and estimate the energy of a muon, which they can use to back-calculate the energy — and expected oscillation — of the original neutrino.

The team, led by Argüelles and Katori, decided to look for Lorentz violation in the highest-energy neutrinos that are produced in the Earth’s atmosphere.

“Neutrino oscillations are a natural interferometer,” explains Katori. “Neutrino oscillations observed with IceCube act as the biggest interferometer in the world to look for the tiniest effects such as a space-time deficit.”

The team looked through two years of data gathered by IceCube, comprising more than 35,000 interactions between muon neutrinos and the detector. If a Lorentz-violating field exists, the researchers theorized, it should produce an abnormal pattern of oscillations in neutrinos arriving at the detector from a particular direction, an effect that should become more pronounced as the energy increases. Such an abnormal oscillation pattern should correspond to a similarly abnormal energy spectrum for the muons.

The researchers calculated the deviation in the energy spectrum that they would expect to see if Lorentz violation existed, and compared this spectrum to the actual energy spectrum IceCube observed, for the highest-energy neutrinos from the atmosphere.

“We are looking for a deficit of muon neutrinos along the direction that traverses large fractions of the Earth,” Argüelles says. “This Lorentz violation-induced disappearance should increase with increasing energy.”

If Lorentz violation exists, physicists believe it should have a more obvious effect on objects at extremely high energies. The atmospheric neutrino dataset analyzed by the team is the highest-energy neutrino data collected by any experiment.

“We were looking to see if a Lorentz violation caused a deviation, and we didn’t see it,” Conrad says. “This closes the book on the possibility of Lorentz violation for a range of high-energy neutrinos, for a very long time.”

A violating limit

“This is a difficult analysis and takes into account effects that had not been considered before,” says Andre Luiz De Gouvea, a physics professor at Northwestern University, who was not involved in the research. “It is, as of right now, the most powerful result of its kind.”

The team’s results set the most stringent limit yet on how strongly neutrinos may be affected by a Lorentz-violating field. The researchers calculated, based on IceCube data, that if such a field exists, its coupling to neutrinos must be weaker than 10⁻³⁶ GeV⁻². That’s a decimal point followed by 35 zeros and then a 1, in units of inverse giga-electronvolts squared — an extremely small value, far weaker than neutrinos’ ordinarily weak interactions with the rest of matter, which are at the level of 10⁻⁵ GeV⁻².
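
For scale, the back-of-the-envelope arithmetic below compares the new limit with the textbook strength of ordinary weak interactions (the Fermi constant, roughly 1.2 × 10⁻⁵ GeV⁻²); it simply works out the ratio between the two figures quoted above.

```python
# Illustrative arithmetic only, using the orders of magnitude quoted above.
lorentz_limit = 1e-36   # GeV^-2, IceCube's new upper bound on a violating field
weak_scale = 1.2e-5     # GeV^-2, Fermi constant (ordinary weak interactions)

ratio = weak_scale / lorentz_limit
print(f"Any Lorentz-violating coupling must be at least ~{ratio:.0e} times weaker "
      "than ordinary weak interactions")
```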

“We were able to set limits on this hypothetical field that are much, much better than any that have been produced before,” Conrad says. “This was an attempt to go out and look at new territory we hadn’t looked at before and see if there are any problems in that space, and there aren’t. But that doesn’t stop us from looking further.”

To that point, the group plans to look for Lorentz violation in even higher-energy neutrinos that are produced from astrophysical sources. IceCube does record astrophysical neutrinos, along with atmospheric ones, but scientists don’t have a complete understanding of their behavior, such as their normal oscillations. Once they can better model these interactions, Conrad says the team will have a better chance of looking for patterns that deviate from the norm.

“Every paper that comes out of particle physics assumes that Einstein is right, and all the rest of our work builds on that,” Conrad says. “And to a very good approximation, he’s correct. It is a fundamental fabric of our theory. So trying to understand whether there are any deviations to it is a really important thing to do.”

This research was supported, in part, by the National Science Foundation.

Categories: In the News

A week of events in Cambridge, Somerville: Dance with Soul Clap, a jazz open mic, more

Cambridge Day - Mon, 07/16/2018 - 04:14
In a look ahead at a week of Cambridge and Somerville events, there's a dance party that culminates in a set by hometown heroes Soul Clap; a Jazzy Open Mic & Vocal Showcase; and science that looks at mind myths and microbiology behind making wine, kombucha and chocolate.
Categories: In the News

Sound waves reveal diamond cache deep in Earth’s interior

MIT News - Mon, 07/16/2018 - 00:00

There may be more than a quadrillion tons of diamond hidden in the Earth’s interior, according to a new study from MIT and other universities. But the new results are unlikely to set off a diamond rush. The scientists estimate the precious minerals are buried more than 100 miles below the surface, far deeper than any drilling expedition has ever reached.

The ultradeep cache may be scattered within cratonic roots — the oldest and most immovable sections of rock that lie beneath the center of most continental tectonic plates. Shaped like inverted mountains, cratons can stretch as deep as 200 miles through the Earth’s crust and into its mantle; geologists refer to their deepest sections as “roots.”

In the new study, scientists estimate that cratonic roots may contain 1 to 2 percent diamond. Considering the total volume of cratonic roots in the Earth, the team figures that about a quadrillion (10¹⁶) tons of diamond are scattered within these ancient rocks, 90 to 150 miles below the surface.

“This shows that diamond is not perhaps this exotic mineral, but on the [geological] scale of things, it’s relatively common,” says Ulrich Faul, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “We can’t get at them, but still, there is much more diamond there than we have ever thought before.”

Faul’s co-authors include scientists from the University of California at Santa Barbara, the Institut de Physique du Globe de Paris, the University of California at Berkeley, Ecole Polytechnique, the Carnegie Institution of Washington, Harvard University, the University of Science and Technology of China, the University of Bayreuth, the University of Melbourne, and University College London.

A sound glitch

Faul and his colleagues came to their conclusion after puzzling over an anomaly in seismic data. For the past few decades, agencies such as the United States Geological Survey have kept global records of seismic activity — essentially, sound waves traveling through the Earth that are triggered by earthquakes, tsunamis, explosions, and other ground-shaking sources. Seismic receivers around the world pick up sound waves from such sources, at various speeds and intensities, which seismologists can use to determine where, for example, an earthquake originated.

Scientists can also use this seismic data to construct an image of what the Earth’s interior might look like. Sound waves move at various speeds through the Earth, depending on the temperature, density, and composition of the rocks through which they travel. Scientists have used this relationship between seismic velocity and rock composition to estimate the types of rocks that make up the Earth’s crust and parts of the upper mantle, also known as the lithosphere.

However, in using seismic data to map the Earth’s interior, scientists have been unable to explain a curious anomaly: Sound waves tend to speed up significantly when passing through the roots of ancient cratons. Cratons are known to be colder and less dense than the surrounding mantle, which would in turn yield slightly faster sound waves, but not quite as fast as what has been measured.   

“The velocities that are measured are faster than what we think we can reproduce with reasonable assumptions about what is there,” Faul says. “Then we have to say, ‘There is a problem.’ That’s how this project started.”

Diamonds in the deep

The team aimed to identify the composition of cratonic roots that might explain the spikes in seismic speeds. To do this, seismologists on the team first used seismic data from the USGS and other sources to generate a three-dimensional model of the velocities of seismic waves traveling through the Earth’s major cratons.

Next, Faul and others, who in the past have measured sound speeds through many different types of minerals in the laboratory, used this knowledge to assemble virtual rocks, made from various combinations of minerals. Then the team calculated how fast sound waves would travel through each virtual rock, and found only one type of rock that produced the same velocities as what the seismologists measured: one that contains 1 to 2 percent diamond, in addition to peridotite (the predominant rock type of the Earth’s upper mantle) and minor amounts of eclogite (representing subducted oceanic crust). This scenario represents at least 1,000 times more diamond than people had previously expected.

“Diamond in many ways is special,” Faul says. “One of its special properties is, the sound velocity in diamond is more than twice as fast as in the dominant mineral in upper mantle rocks, olivine.”

The researchers found that a rock composition of 1 to 2 percent diamond would be just enough to produce the higher sound velocities that the seismologists measured. This small fraction of diamond would also not change the overall density of a craton, which is naturally less dense than the surrounding mantle.
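
To get a feel for the size of that effect, a crude volume-weighted average of P-wave speeds (a Voigt-style mixing rule, using rough textbook velocities of about 8 km/s for peridotite and 18 km/s for diamond) shows that 1 to 2 percent diamond raises the bulk velocity by a percent or two. This is only an illustration; the study's actual modeling relies on full mineral-physics calculations.

```python
def mixture_vp(diamond_fraction, vp_peridotite=8.1, vp_diamond=18.0):
    """Crude volume-weighted (Voigt-style) average of P-wave speeds, in km/s.
    Mineral velocities are approximate textbook values, for illustration only."""
    return (1.0 - diamond_fraction) * vp_peridotite + diamond_fraction * vp_diamond

baseline = mixture_vp(0.0)
for frac in (0.01, 0.02):
    vp = mixture_vp(frac)
    print(f"{frac:.0%} diamond -> ~{vp:.2f} km/s "
          f"({100 * (vp / baseline - 1):.1f}% faster than diamond-free rock)")
```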

“They are like pieces of wood, floating on water,” Faul says. “Cratons are a tiny bit less dense than their surroundings, so they don’t get subducted back into the Earth but stay floating on the surface. This is how they preserve the oldest rocks. So we found that you just need 1 to 2 percent diamond for cratons to be stable and not sink.”

In a way, Faul says, cratonic roots made partly of diamond make sense. Diamonds are forged in the high-pressure, high-temperature environment of the deep Earth and only make it close to the surface through volcanic eruptions that occur every few tens of millions of years. These eruptions carve out geologic “pipes” made of a type of rock called kimberlite (named after the town of Kimberley, South Africa, where the first diamonds in this type of rock were found). Diamond, along with magma from deep in the Earth, can spew out through kimberlite pipes onto the surface of the Earth.

For the most part, kimberlite pipes have been found at the edges of cratonic roots, such as in certain parts of Canada, Siberia, Australia, and South Africa. It would make sense, then, that cratonic roots should contain some diamond in their makeup.  

“It’s circumstantial evidence, but we’ve pieced it all together,” Faul says. “We went through all the different possibilities, from every angle, and this is the only one that’s left as a reasonable explanation.”

This research was supported, in part, by the National Science Foundation. 


Categories: In the News

New materials improve delivery of therapeutic messenger RNA

MIT News - Mon, 07/16/2018 - 00:00

In an advance that could lead to new treatments for a variety of diseases, MIT researchers have devised a new way to deliver messenger RNA (mRNA) into cells.

Messenger RNA, a large nucleic acid that encodes genetic information, can direct cells to produce specific proteins. Unlike DNA, mRNA is not permanently inserted into a cell’s genome, so it could be used to produce a therapeutic protein that is only needed temporarily. It can also be used to produce gene-editing proteins that alter a cell’s genome and then disappear, minimizing the risk of off-target effects.

Because mRNA molecules are so large, researchers have had difficulty designing ways to efficiently get them inside cells. It has also been a challenge to deliver mRNA to specific organs in the body. The new MIT approach, which involves packaging mRNA into polymers called amino-polyesters, addresses both of those obstacles.

“We are excited by the potential of these formulations to deliver mRNA in a safe and effective manner,” says Daniel Anderson, an associate professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES).

Anderson is the senior author of the paper, which appears in the journal Advanced Materials. The paper’s lead authors are MIT postdoc Piotr Kowalski and former visiting graduate student Umberto Capasso Palmiero of Politecnico di Milano. Other authors are research associate Yuxuan Huang, postdoc Arnab Rudra, and David H. Koch Institute Professor Robert Langer.

Polymer control

Cells use mRNA to carry protein-building instructions from DNA to ribosomes, where proteins are assembled. By delivering synthetic mRNA to cells, researchers hope to be able to stimulate cells to produce proteins that could be used to treat disease. Scientists have developed some effective methods for delivering smaller RNA molecules, and a number of these materials have shown potential in clinical trials.

The MIT team decided to package mRNA into new polymers called amino-polyesters. These polymers are biodegradable, and unlike many other delivery polymers, they do not have a strong positive charge, which may make them less likely to damage cells.

To create the polymers, the researchers used an approach that allows them to control the properties of the polymer, such as its molecular weight. This means that the quality of the polymers produced will be similar in each batch, which is important for clinical translation and often not the case with other polymer synthesis methods.

“Being able to control the molecular weight and the properties of your material helps to be able to reproducibly make nanoparticles with similar qualities, and to produce carriers starting from building blocks that are biocompatible could reduce their toxicity,” Capasso Palmiero says.

“It makes clinical translation much harder if you don’t have control over the reproducibility of the delivery system and the released degradation products, which is a challenge for polymer-based nucleic acid delivery,” Kowalski says.

For this study, the researchers created a diverse library of polymers that varied in the composition of the amino-alcohol core and the lactone monomers. The researchers also varied the length of the polymer chains and the presence of carbon side chains on the lactone subunits.

After creating about three dozen different polymers, the researchers combined them with lipids, which help stabilize the particles, and encapsulated mRNA within the nanoparticles.
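
A library like this is essentially the cross product of the building-block choices. The toy enumeration below, with made-up core and monomer names, only illustrates how a handful of variations quickly yields a few dozen candidate polymers; it is not the paper's actual reagent list.

```python
from itertools import product

# Hypothetical building blocks; names and counts are illustrative only.
amino_alcohol_cores = ["core_A", "core_B"]
lactone_monomers = ["caprolactone", "valerolactone", "decalactone"]
chain_lengths = ["short", "medium", "long"]
side_chains = ["none", "alkyl"]

library = [
    f"{core} / {lactone} / {length} chain / {side} side chain"
    for core, lactone, length, side in product(
        amino_alcohol_cores, lactone_monomers, chain_lengths, side_chains
    )
]
print(len(library), "candidate polymers")  # 2 * 3 * 3 * 2 = 36
```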

In tests in mice, the researchers identified several particles that could effectively deliver mRNA to cells and induce the cells to synthesize the protein encoded by the mRNA. To their surprise, they also found that several of the nanoparticles appeared to preferentially accumulate in certain organs, including the liver, lungs, heart, and spleen. This kind of selectivity may allow researchers to deliver specific therapies to certain locations in the body.

“It is challenging to achieve tissue-specific mRNA delivery,” says Yizhou Dong, an associate professor of pharmaceutics and pharmaceutical chemistry at Ohio State University, who was not involved in the research. “The findings in this report are very exciting and provide new insights on chemical features of polymers and their interactions with different tissues in vivo. These novel polymeric nanomaterials will facilitate systemic delivery of mRNA for therapeutic applications.”

Targeting disease

The researchers did not investigate what makes different nanoparticles go to different organs, but they hope to further study that question. Particles that specifically target different organs could be very useful for treating lung diseases such as pulmonary hypertension, or for delivering vaccines to immune cells in the spleen, Kowalski says. Another possible application is using the particles to deliver mRNA encoding the proteins required for the genome-editing technique known as CRISPR-Cas9, which can make permanent additions or deletions to a cell’s genome.

Anderson’s lab is now working in collaboration with researchers at the Polytechnic University of Milan on the next generation of these polymers in hopes of improving the efficiency of RNA delivery and enhancing the particles’ ability to target specific organs.

“There is definitely a potential to increase the efficacy of these materials by further modifications, and also there is potential to hopefully find particles with different organ-specificity by extending the library,” Kowalski says.

The research was funded by the U.S. Defense Advanced Research Projects Agency and the Progetto Roberto Rocca.

Categories: In the News

With data showing racial divide in housing, issue of discrimination due for examination

Cambridge Day - Sun, 07/15/2018 - 03:58
The issue of housing discrimination in Cambridge is coming under unaccustomed scrutiny as the City Council takes steps to update the city’s 34-year-old fair housing ordinance and questions how well the city’s Human Rights Commission deals with housing bias.
Categories: In the News

Concerns about afterlife in a life with Trump, when America feels more like hell for many

Cambridge Day - Sat, 07/14/2018 - 20:16
Our devouring of dystopian books such as “The Handmaid’s Tale” is a search for answers to a frightening new normal – and so is the steady stream of queries about the afterlife coming ministers’ way in the era of President Donald Trump.
Categories: In the News

Attend meetings on developments citywide; vacant storefronts, bridge work get looks too

Cambridge Day - Sat, 07/14/2018 - 18:48
Public meetings this week look at issues such as a potentially 500-unit apartment building in Cambridge Crossing; the 300 units of affordable housing at Millers River in East Cambridge; the Envision Cambridge citywide master planning process; and Porter Square's vacant storefronts.
Categories: In the News

Two men shot in arms in separate incidents early Saturday, in Port and East Cambridge

Cambridge Day - Sat, 07/14/2018 - 13:41
Police are investigating two separate incidents early this morning in which men were shot in the arm. One incident was in The Port neighborhood, scene of a Wednesday peace rally.
Categories: In the News