Yaaree Sort of ‘Science’ Category



The Milky Way – our own, sometimes very visible galaxy – seen over Northumberland International Dark Sky Park. Credit: Alan Wallace

Scientists have found a “ghost” galaxy – roughly the same mass as our own, but almost entirely made up of dark matter.

Dragonfly 44 is almost entirely made up of dark matter, the mysterious – and for now mostly theoretical – stuff that makes up 27 per cent of the universe but has never actually been seen.

Though the galaxy is relatively nearby, at least in the scale of the universe, it is so dark that scientists completely missed it for decades.

But it was finally spotted last year. It sits in the Coma galaxy cluster, about 330 million light years from us.

When scientists looked at it further, they found that it was not just a normal set of stars – but instead a ghost, made up of dark matter. Though it has about the same mass as our own Milky Way galaxy, only one hundredth of one per cent of it is made up of the normal matter – stars, dust and gas – that surrounds us.

Rather, it is 99.99 per cent made up of dark matter. Nobody knows what exactly that is, how it came about – or even how a galaxy could have arisen that looked that way.

Dragonfly 44 does have some normal stars of its own. But our galaxy has about a hundred times more stars.

Astronomers uncovered the ghost galaxy’s strange nature by looking at the movement of its stars – movement that seemed to be influenced by matter that, by normal measures, doesn’t exist.

Professor Pieter van Dokkum, a member of the team from Yale University in the US, said: “Motions of the stars tell you how much matter there is. They don’t care what form the matter is, they just tell you that it’s there.

“In the Dragonfly galaxy, stars move very fast. So there was a huge discrepancy.

“We found many times more mass indicated by the motions of the stars than there is mass in the stars themselves.”

Scientists know that there must be something providing the gravity that is needed to hold the galaxy together. But the mass that would normally provide that isn’t there.
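The gap between the mass the stars imply and the mass the stars contain can be sketched with a simple virial-style estimate, M ≈ σ²R/G. The numbers below (a velocity dispersion of about 47 km/s, a characteristic radius of about 4.6 kiloparsecs, and a stellar mass of a few hundred million Suns) are illustrative values drawn from press coverage of the discovery, not the paper's exact figures:

```python
# Rough dynamical-mass estimate for Dragonfly 44 using the simple
# virial relation M ~ sigma^2 * R / G. All input values are assumed
# order-of-magnitude figures, not the published measurements.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
KPC = 3.086e19           # kiloparsec in metres

sigma = 47e3             # stellar velocity dispersion, m/s (assumed)
radius = 4.6 * KPC       # characteristic radius, m (assumed)
m_stars = 3e8 * M_SUN    # stellar mass, kg (assumed)

# Total (dynamical) mass implied by the stellar motions
m_dyn = sigma**2 * radius / G

ratio = m_dyn / m_stars
print(f"dynamical mass ~ {m_dyn / M_SUN:.1e} solar masses")
print(f"dynamical-to-stellar mass ratio ~ {ratio:.0f}")
```

Even this crude estimate gives a dynamical mass many times the stellar mass – the same kind of discrepancy van Dokkum describes, though the published analysis uses a more careful mass estimator.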

Scientists used the Keck Observatory in Hawaii to study the galaxy, and report their findings in the Astrophysical Journal Letters.

They said that there may be many more of the strange, ghost galaxies waiting to be found.

Co-author Professor Roberto Abraham, from the University of Toronto in Canada, said: “We have no idea how galaxies like Dragonfly 44 could have formed.

“The … data show that a relatively large fraction of the stars is in the form of very compact clusters, and that is probably an important clue. But at the moment we’re just guessing.”

Dark matter remains perhaps the biggest mystery of the universe. While scientists know that it must exist – the calculations that account for the make-up of the universe require it – we’ve never actually seen it, and attempts to do so have failed.

But the discovery could let us finally find out more about the mysterious stuff that surrounds us.

“Ultimately what we really want to learn is what dark matter is,” said Professor van Dokkum.

“The race is on to find massive dark galaxies that are even closer to us than Dragonfly 44, so we can look for feeble signals that may reveal a dark matter particle.”

Only 5 per cent of the mass-energy of the universe is made up of the kind of normal matter that we can see and touch. Dark matter makes up a large part of the rest.

Despite constituting 27 per cent of the universe, dark matter doesn’t reflect light and has so far escaped every attempt to detect it directly. Experiments instead try to study it through indirect means – but even those have often failed.

The remaining part of the universe is made up of something even more confusing. Dark energy accounts for 68 per cent of the universe, and is a kind of anti-gravitational force that is pushing galaxies apart, more and more quickly.
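The article's budget for the universe – ordinary matter, dark matter and dark energy – can be checked with a line of arithmetic, along with how heavily dark matter outweighs the normal matter we can see:

```python
# The quoted shares of the universe's mass-energy budget.
ordinary = 5       # per cent: normal matter (stars, gas, dust, us)
dark_matter = 27   # per cent: dark matter
dark_energy = 68   # per cent: dark energy

total = ordinary + dark_matter + dark_energy
print(f"total: {total} per cent")  # the three shares sum to 100

# Dark matter per unit of ordinary matter: 27 / 5 = 5.4
print(f"dark matter outweighs normal matter {dark_matter / ordinary:.1f}x")
```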




SPECK WORK  A crystal that vibrates in response to ultrasound allowed this tiny sensor (shown on a fingertip) to pick up signals from rat nerves and muscles.

A small device with a heart of crystal can eavesdrop on muscles and nerves, scientists report August 3 in Neuron. Called neural dust, the device is wireless and needs no batteries, appealing attributes for scientists seeking better ways to monitor and influence the body and brain.

“It’s certainly promising,” says electrical engineer Khalil Najafi of the University of Michigan in Ann Arbor. “They have a system that operates, and operates well.”

Michel Maharbiz of the University of California, Berkeley and colleagues presented their neural dust idea in 2013. But the paper in Neuron represents the first time the system has been used in animals. Neural dust detected activity when researchers artificially stimulated rats’ sciatic nerves and muscles.

Unlike other devices that rely on electromagnetic waves, neural dust is powered by ultrasound. When hit with ultrasound generated by a source outside the body, a specialized crystal begins to vibrate. This mechanical motion powers the system, allowing electrodes to pick up electrical activity. This activity can then change ultrasound signals that travel back to the source, offering a readout in a way that’s similar to a sonar measurement.

Neural dust devices may help scientists avoid some of the problems with current implants, such as a limited life span. Implantable devices can falter in the brain’s hostile environment. “It’s like throwing a piece of electronics in the ocean and wanting it to run for 20 years,” Maharbiz says. “Eventually things start to degrade and break down.” But having a simple, small device may increase the life span of such implants — although Maharbiz and colleagues don’t yet know how long their system could last.

What’s more, the brain can mount a defense against the foreign object, which can result in thick tissue surrounding the implant. Smaller systems damage the brain less. At over 2 millimeters long and just under 1 millimeter wide, a particle of the neural dust described in the paper is larger than most actual specks of dust. But the system is still shrinking. “There’s a lot of room here to just really push it, and that’s what excites us,” Maharbiz says. “You can keep getting smaller and smaller and smaller.”

Neural dust could ultimately be used to detect different sorts of data in the body, not just electrical activity, Maharbiz says. The device could be tweaked to sense temperature, pressure, oxygen or pH.

Najafi cautions that it remains to be seen whether the system will prove useful for listening to nerve cell behavior inside the brain. The system would need to include many different pieces of neural dust, and it’s not clear how effective that would be. “It’s a lot harder than the notion of dust implies,” he says.




In this photo of living mouse neurons, calcium imaging techniques record the firing of individual neurons and their pulses of electricity.
Credit: Yuste Laboratory/Columbia University

Neurons that fire together really do wire together, says a new study in Science, suggesting that the three-pound computer in our heads may be more malleable than we think.

In the latest issue of Science, neuroscientists at Columbia University demonstrate that a set of neurons trained to fire in unison could be reactivated as much as a day later if just one neuron in the network was stimulated. Though further research is needed, their findings suggest that groups of activated neurons may form the basic building blocks of learning and memory, as originally hypothesized by psychologist Donald Hebb in the 1940s.

“I always thought the brain was mostly hard-wired,” said the study’s senior author, Dr. Rafael Yuste, a neuroscience professor at Columbia University. “But then I saw the results and said ‘Holy moly, this whole thing is plastic.’ We’re dealing with a plastic computer that’s constantly learning and changing.”

The researchers were able to control and observe the brain of a living mouse using the optogenetic tools that have revolutionized neuroscience in the last decade. They injected the mouse with a virus containing light-sensitive proteins engineered to reach specific brain cells. Once inside a cell, the proteins allowed researchers to remotely activate the neuron with light, as if switching on a TV.

The mouse was allowed to run freely on a treadmill while its head was held still under a microscope. With one laser, the researchers beamed light through its skull to stimulate a small group of cells in the visual cortex. With a second laser, they recorded rising levels of calcium in each neuron as it fired, thus imaging the activity of individual cells.

Before optogenetics, scientists had to open the skull and implant electrodes into living tissue to stimulate neurons with electricity and measure their response. Even a mouse brain of 100 million neurons, nearly a thousandth the size of ours, was too dense to get a close look at groups of neurons.

Optogenetics allowed researchers to get inside the brain non-invasively and control it far more precisely. In the last decade, researchers have restored sight and hearing to blind and deaf mice, and turned normal mice aggressive, all by manipulating specific brain regions.

The breakthrough that allowed researchers to reprogram a cluster of cells in the brain is the culmination of more than a decade of work. With tissue samples from the mouse visual cortex, Yuste and his colleagues showed in a 2003 study in Nature that neurons coordinated their firing in small networks called neural ensembles. A year later, they demonstrated that the ensembles fired off in sequential patterns through time.

As techniques for controlling and observing cells in living animals improved, they learned that these neural ensembles are active even without stimulation. They used this information to develop mathematical algorithms for finding neural ensembles in the visual cortex. They were then able to show, as they had in the tissue samples earlier, that neural ensembles in living animals also fire one after the other in sequential patterns.

The current study in Science shows that these networks can be artificially implanted and replayed, says Yuste, much as the scent of a tea-soaked madeleine takes novelist Marcel Proust back to his memories of childhood.

Pairing two-photon stimulation technology with two-photon calcium imaging allowed the researchers to document how individual cells responded to light stimulation. Though previous studies have targeted and recorded individual cells, none have demonstrated that a bundle of neurons could be fired off together to imprint what they call a “neuronal microcircuit” in a live animal’s brain.

“If you told me a year ago we could stimulate 20 neurons in a mouse brain of 100 million neurons and alter their behavior, I’d say no way,” said Yuste, who is also a member of the Data Science Institute. “It’s like reconfiguring three grains of sand at the beach.”

The researchers think that the network of activated neurons they artificially created may have implanted an image completely unfamiliar to the mouse. They are now developing a behavioral study to try to prove this.

“We think that these methods to read and write activity into the living brain will have a major impact in neuroscience and medicine,” said the study’s lead author, Luis Carrillo-Reid, a postdoctoral researcher at Columbia.

Dr. Daniel Javitt, a psychiatry professor at Columbia University Medical Center who was not involved in the study, says the work could potentially be used to restore normal connection patterns in the brains of people with epilepsy and other brain disorders. Major technical hurdles, however, would need to be overcome before optogenetic techniques could be applied to humans.

The research is part of a $300 million brain-mapping effort called the U.S. BRAIN Initiative, which grew out of an earlier proposal by Yuste and his colleagues to develop tools for mapping the brain activity of fruit flies to more complex mammals, including humans.



PRECIOUS METAL  Geochemists explore platinum, gold and other rare elements that are attracted to iron to understand how Earth’s core formed billions of years ago.

Four and a half billion years ago, after Earth’s fiery birth, the infant planet began to radically reshape itself, separating into distinct layers. Metals — mostly iron with a bit of nickel — fell toward the center to form a core. The growing core also vacuumed up other metallic elements, such as platinum, iridium and gold.

By the time the core finished forming, about 30 million years later, it had sequestered more than 98 percent of these precious elements. The outer layers of the planet — the mantle and the crust — had barely any platinum and gold left. That’s why these metals are so rare today.

Battles have been fought, and wars won, over the pull of shiny precious metals, which have long symbolized power and influence. But for scientists, the rare metals’ lure is less about their shimmering beauty than about the powerful stories they can tell about how the Earth, the moon and other planetary bodies formed and evolved.

By analyzing rare primordial materials, researchers are uncovering geochemical fingerprints that have survived essentially unchanged over billions of years. These fingerprints allow scientists to compare Earth rocks with moon rocks and test ideas about whether giant meteorites once dusted the inner solar system with extraterrestrial platinum and gold. Such research can help scientists learn how volatiles such as water may have spread, leaving some worlds water-rich and others bone-dry.

These explorations, motivated by a growing appreciation of what such rare metals reveal about Earth’s history, are now possible thanks to new analytical techniques. “They give us a window into all kinds of processes that we want to understand,” says Richard Walker, a geochemist at the University of Maryland in College Park.

The highly siderophile elements

Eight elements that are very much attracted to iron.


Geochemical memory

Platinum and gold are among eight occupants of the periodic table belonging to the category known as the highly siderophile elements. That name dates back to the 1920s, when Victor Goldschmidt, a mineralogist at the University of Oslo, divided the elements into groups depending on what they liked to combine with in nature. His four classifications are still used today: the lithophiles (rock-lovers), the chalcophiles (ore- or sulfur-lovers), the atmophiles (gas-lovers) and siderophiles, the iron-lovers.

The siderophile elements tend not to ally themselves with the oxygen- and silicon-based compounds that form the bulk of Earth’s crust. They form dense alloys with iron instead. One such element, tungsten (symbolized by W in the periodic table), is an iron-lover that has been important in recent scientific studies of Earth’s geologic history. A step beyond tungsten are those highly siderophile elements, which are even bigger fans of iron. They are ruthenium, rhodium, palladium, rhenium, osmium and iridium along with platinum and gold.

Because highly siderophile elements are relatively abundant in the core and scarce in the mantle and crust, they help scientists trace how Earth’s insides have evolved over time. Dig up a rock from deep within a mine, or pick up one from a freshly erupted volcano, and you can measure the siderophile elements within. The measurements might show whether a radioactive version of one such element has decayed into another, or whether the rock has higher amounts of one particular variety of siderophile. In turn, that information can reveal how material has shifted around and been chemically processed deep within the planet.

Fans of Fe

Eight chemical elements, known as the highly siderophile elements, are preferentially drawn to iron when molten. Most of them have relatively high melting points and resist being corroded or oxidized. Along with the siderophile tungsten, they serve as powerful tracers for how Earth’s interior separated into layers billions of years ago.


By analyzing the iron-lovers within each rock, scientists can probe what the rock has been doing for billions of years. “We can trace the entire evolutionary process of how a planet formed,” says James Day, a geochemist at the Scripps Institution of Oceanography in La Jolla, Calif. “That’s why someone like me is interested.”

For instance, Walker and his colleagues have explored siderophile elements in some of the oldest rocks on Earth. Through the process of plate tectonics, in which plates of Earth’s crust grind against, pull apart from and occasionally dive beneath one another, most ancient rocks have been dragged back into the planet and destroyed by melting. But in southwestern Greenland, in a place called Isua, a chunk of ancient crust never got pulled down by plate tectonics (SN: 3/24/07, p. 179). Walker and colleagues, led by Hanika Rizo of the University of Quebec in Montreal, recently studied siderophile elements in these 3.3-billion- to 3.8-billion-year-old rocks.

The scientists looked at the abundance of highly siderophile elements in the Greenland rocks but found that, in this case, the biggest clues came from the slightly less iron-loving tungsten. The rocks contain more of one variety of tungsten, known as tungsten-182, than expected. That isotope forms from the radioactive decay of hafnium-182, which existed only during Earth’s first 50 million years. The Greenland rocks thus serve as a sort of time capsule that helps reveal the history of the early solar system, Rizo, Walker and colleagues wrote in February in Geochimica et Cosmochimica Acta.
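Why hafnium-182 acts as a clock for Earth's first ~50 million years comes down to radioactive decay: with a half-life of roughly 8.9 million years (a standard literature value, assumed here), virtually none survives beyond that window, so any tungsten-182 excess must record very early events. A quick sketch of the decay law:

```python
# Fraction of hafnium-182 surviving after t million years,
# N(t) = N0 * 0.5 ** (t / half_life).
HALF_LIFE_MYR = 8.9   # hafnium-182 half-life in Myr (assumed value)

def fraction_remaining(t_myr):
    """Fraction of the original hafnium-182 left after t_myr Myr."""
    return 0.5 ** (t_myr / HALF_LIFE_MYR)

for t in (10, 30, 50):
    print(f"after {t} Myr: {fraction_remaining(t):.1%} remains")
```

After 50 million years only about 2 per cent of the original hafnium-182 is left, so the isotope is effectively extinct today – which is what makes its daughter, tungsten-182, such a clean tracer of the planet's infancy.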

“We believe we are accessing parts of Earth’s mantle that formed and took on some of their chemical characteristics while the Earth was still growing,” Walker says. “You can call it accessing a building block of the Earth.”

Studies of these remnants of the ancient planet suggest that Earth’s mantle has remained chemically patchy. Like lumps of flour in poorly mixed cake batter, clumps of primordial material, with higher amounts of tungsten-182, are studded throughout a smoother, more evenly mixed matrix. That’s surprising because researchers thought that the hot, churning insides of the Earth would have stirred everything around over the course of billions of years. Somehow portions of the mantle resisted the planet’s best blending efforts, Walker reported in June at the Goldschmidt geochemistry meeting in Yokohama, Japan.

By studying where those patches are and what they are made of, researchers can investigate such questions as how much convection there was inside the early Earth, and whether any volcanoes today tap into this primordial material. In May, for instance, Walker’s team reported that it had used siderophile elements to identify geochemically primitive lavas in Canada’s Baffin Bay and in the South Pacific (SN: 6/11/16, p. 13).

Ancient rocks in Isua, Greenland, date back to more than 3.8 billion years ago. Siderophile elements in these rocks bear witness to geological processes in the planet’s first 50 million years.


Like the ancient Greenland crust, these rocks also had an overabundance of tungsten-182. Apparently the Canadian and Pacific volcanoes tapped into a deep reservoir of primordial material, which flowed up through the throat of a volcano and out onto the surface. Studying the iron-loving elements in those rocks is like taking a siderophile time machine into the past and seeing what the Earth was like 4.5 billion years ago.

“It never ceases to amaze me what the rocks can tell,” says Amy Riches, a geochemist at Durham University in England.

A dusting from space

Highly siderophile elements can teach about more than just the planet Earth. They can reveal secrets about the history of the moon, Mars and other nearby planetary bodies. That’s because all the worlds in the inner solar system apparently got a dusting of gold, platinum and other highly siderophile elements during meteorite bombardments around 4 billion years ago.

The early solar system was something of a cosmic shooting gallery. After the planets coalesced, there were still a lot of leftover space rocks careening around. One enormous impact is thought to have smashed the Earth and spalled off enough debris to form the moon. Other, smaller impacts continued to pummel the inner planets for the first half-billion years or so of their existence. Each collision would have brought a little more fresh material to each world.

On Earth, meteorite impacts could have delivered half a percent to 1 percent of the planet’s total mass. Many meteorites that fall to Earth and are analyzed contain relatively high amounts of highly siderophile elements, which suggests that meteorites hitting the early Earth would have carried a lot of them, too. If so, then the cosmic smashups regularly showered Earth with a fresh coating of gold, platinum and other precious elements. By this time, Earth had already finished forming its core, so the highly siderophile elements remained sprinkled throughout its upper layers rather than being vacuumed into its depths.
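To give those percentages a sense of scale, the late-accreted material can be compared against the Moon itself, using standard reference masses for Earth and the Moon:

```python
# How much mass does "half a percent to 1 percent" of Earth amount to?
M_EARTH = 5.97e24   # Earth's mass, kg (standard reference value)
M_MOON = 7.35e22    # Moon's mass, kg (standard reference value)

low = 0.005 * M_EARTH    # 0.5 per cent of Earth's mass
high = 0.01 * M_EARTH    # 1 per cent of Earth's mass

print(f"late-accreted mass: {low:.2e} to {high:.2e} kg")
print(f"that is {low / M_MOON:.2f} to {high / M_MOON:.2f} Moon masses")
```

Even the low end of the range works out to roughly 40 per cent of the Moon's mass delivered by impacts – a substantial veneer.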

This “late accretion” of fresh material could help explain a long-standing puzzle. The amounts of highly siderophile elements in Earth’s mantle are higher than predicted, according to laboratory experiments that try to mimic how molten metal separated from rock as Earth was forming. But a shower of meteorites hitting soon after core formation stopped could have done the trick, a process that Day, Walker and Alan Brandon of the University of Houston discuss in the January Reviews in Mineralogy & Geochemistry.

A fresh dusting

Some 4.5 billion years ago, as Earth’s core was solidifying within the newborn molten planet (1), the highly siderophile elements were drawn in to the iron-rich core (2). Later, meteorites pummeling the planet may have brought a fresh dusting of these rare metals (3).


Not everyone accepts the late accretion idea. Some scientists, including Kevin Righter at NASA’s Johnson Space Center in Houston, note that siderophile elements become less iron-loving when squeezed at high pressures and temperatures. That could mean fewer of them dived deep into Earth’s core, and more of them would be left behind for the mantle and the crust. No need for an express meteorite delivery.

Debate probably won’t end anytime soon, as various laboratory experiments seem to support both conclusions. “People are still hacking away at trying to understand this,” says James Brenan, a geochemist at Dalhousie University in Halifax, Canada. Clarity is important for getting to the heart of what the highly siderophile elements can tell scientists — where they came from, how they separated out within the primordial Earth, and what they have been doing since then.

Less precious moon

Another big unanswered question is why the Earth and the moon seem to be so different from each other when it comes to highly siderophile elements.

Researchers have a very limited sample of moon rocks to study — just those brought back by the Apollo astronauts, and a few lunar meteorites that happened to fall on Earth and were picked up. None of these samples come from the moon’s deep interior. But by extrapolating from the chemistry of the rocks they have in hand, researchers have calculated that the moon’s mantle has surprisingly low amounts of the highly siderophile elements compared with Earth’s mantle — just about 2 percent as much.

If the late-accretion idea is right, then both Earth and the moon should have been dusted by the same meteoritic bombardment of gold, platinum and other elements, and they should have similar amounts of highly siderophile elements in their mantles. That doesn’t seem to be the case. The explanation may lie partly in the fact that the moon is a lot smaller than the Earth, Day and Walker noted last year in Earth and Planetary Science Letters.

Think of the meteorite bombardment as someone throwing snowballs at a pair of very different-sized dogs, Day says. “The statistical chance of these snowballs colliding with a Rottweiler are much higher than with a Chihuahua,” he says. In other words, Earth acquired more platinum and gold simply because it is so much larger than the moon. Both went through the same snowball bombardment, but the bigger object collected more snow coating.
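Day's snowball argument can be put in rough numbers: for a uniform rain of impactors, the amount each body intercepts scales with its cross-sectional area, πR². Using standard mean radii:

```python
# Geometric cross-section comparison: Earth vs. the Moon as targets.
# For a uniform flux of impactors, intercepted mass scales as R**2
# (the factor of pi cancels in the ratio).
R_EARTH = 6371.0   # Earth's mean radius, km (standard value)
R_MOON = 1737.4    # Moon's mean radius, km (standard value)

area_ratio = (R_EARTH / R_MOON) ** 2
print(f"Earth intercepts roughly {area_ratio:.0f}x as much material as the Moon")
```

Geometry alone gives Earth a factor of about 13; gravitational focusing and the chance arrival of a few very large bodies would tilt the odds further still, which is part of the statistical argument Day and Walker make.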

As with most things geochemical, there is another possible explanation. The moon does not have a core that would have sucked highly siderophile elements into its interior. But it’s possible that something else could be holding the gold and platinum deep within the moon, Brenan says. That something is sulfur.

The iron-lovers are also sulfur-likers. In the absence of metal, highly siderophile elements tend to clump with sulfur instead. By studying the interplay between the two, geochemists can start to tease out how the various elements behave as rocks are squeezed, melted and otherwise altered over billions of years of geologic history.

In recent laboratory experiments, Brenan mixed up a recipe of rock meant to simulate the lunar mantle. Earlier work had suggested that there was simply not enough sulfur deep in the moon for iron sulfide to be present. But his work, which used a more realistic representation of the lunar mantle, suggests that iron sulfide can indeed exist and be stable there. That iron sulfide would have kept the iron-lovers deep inside the moon — trapping the highly siderophile elements out of sight.

Under pressure

Lab experiments at high pressures, meant to simulate the moon’s interior, show different patterns of iron sulfide crystals in mixtures rich and poor in iron (top, 96 percent iron; bottom, 25 percent iron). Iron sulfide could have trapped siderophile elements deep within the moon, explaining why their lunar abundance differs from Earth’s.


The sulfur work may have even broader implications for understanding how the Earth, moon and other worlds in the inner solar system got their water. Both sulfur and water are relatively volatile compounds that often appear together. Researchers thought both had been lost from the moon long ago. After all, the lunar surface today is dry and barren. But in recent years, scientists have been analyzing droplets of melt in lunar rocks and have found surprisingly high amounts of sulfur and water. That indicates that the moon may once have been wetter than thought. “That has really changed our thinking,” Brenan says.

By looking at the concentration of these elements in lunar rocks, geochemists can cross-check their measurements of sulfur and water — and begin to understand the differences between Earth and the moon.

Still searching

At the University of Münster in Germany, geochemist Mario Fischer-Gödde has been working to pull together the various threads of what highly siderophile elements can reveal. Many researchers have suggested that Earth may have gotten much of its water and other volatile elements during the meteorite bombardment of the late accretion. So Fischer-Gödde is systematically analyzing different types of meteorites found on Earth to see if they could have actually delivered these volatiles.

He focuses on the element ruthenium. Like the other highly siderophile elements, it probably arrived on Earth aboard meteorites during the late accretion. Weirdly, though, none of the dozens of meteorites Fischer-Gödde has analyzed contain ruthenium isotopes that match those found in the mantle. He concludes that none of the meteorite types found on Earth so far could be the source of the late accretion materials. Some other source — maybe other rocky bits that were flying around the inner solar system — must have brought ruthenium and other siderophiles to Earth, he reported at the Durham workshop.

And that means the highly siderophile elements still have many mysteries to reveal, and there’s plenty of work to be done. With new ever-more-sensitive techniques under development — such as scans that reveal individual atoms of highly siderophile elements within small grains of metal — researchers are pushing forward in their efforts to analyze the siderophile elements, hoping to squeeze more stories of Earth’s beginning from the discreet iron-lovers.




Don’t tell Meg Ryan and Billy Crystal (above), but instead of having an evolutionary purpose, the pleasure of a female orgasm may be a vestige of how ovulation was induced in ancestral mammals.

Moviestore collection Ltd/Alamy Stock Photo

Billy Crystal may have been shocked when Meg Ryan so effectively—and amusingly—faked an orgasm in a restaurant during the 1989 movie When Harry Met Sally, but surveys suggest only one-third of women are regularly fully aroused during intercourse. And although poor partner performance, psychological issues, or physiological shortfalls are often cited as the reason, two evolutionary biologists now offer a provocative new explanation. In a paper published today, they argue that female orgasm is an evolutionary holdover from an ancient system, seen in some other mammals, in which intercourse stimulated important hormonal surges that drive ovulation.

Humans and other primates don’t need intercourse to trigger ovulation—they evolved to a point where it happens on its own—but the hormonal changes accompanying intercourse persist and fuel the orgasms that make sex more enjoyable, the biologists hypothesize. And because those hormonal surges no longer confer a biological advantage, orgasms during intercourse may be lost in some women. This explanation “takes away a lot of stigma” of underwhelming sexual relations, says one of the authors, Mihaela Pavlićev, of Cincinnati Children’s Hospital in Ohio.

The new work addresses what David Puts, a biological anthropologist at Pennsylvania State University, University Park, calls “one of the most contentious questions in the study of the evolution of human sexuality: whether women’s orgasm has an evolutionary function.” There are more than a dozen theories about the evolution of orgasms, most proposed decades or more ago. They include arguments that women have orgasms because their reproductive machinery has the same origins as those of men, who need to have orgasms to ejaculate sperm. Others think orgasms are an evolutionary novelty that persists because it helps foster loyal partners. Some have proposed that female orgasms induce physiological changes that increase the chances of conception, but there’s no strong evidence that women who have more orgasms have increased fecundity.

Orgasm itself may have no evolutionary function, but it is derived from a key part of the reproductive cycle, Pavlićev and her colleague propose today in the Journal of Experimental Zoology Part B: Molecular and Developmental Evolution. Pavlićev didn’t start out studying orgasms. To better understand the evolution of reproduction, she was compiling data on the ovarian cycle in different mammal species. During this cycle, cells destined to become eggs mature, escape from the ovary, and travel down the reproductive tract. She discovered that in some species, environmental factors control egg maturation and subsequent ovulation; in others, such as rabbits, sexual intercourse with a male or even just his presence causes the release of the egg. In either case, a series of changes involving the hormones oxytocin and prolactin is triggered that causes the egg to mature and migrate. In humans and other primates, the ovulatory cycle has become spontaneous, generally on a set schedule that requires neither an environmental trigger nor a male. Pavlićev then realized that women still undergo the same hormonal changes as species with induced ovulation, but during orgasm.

To see whether induced ovulation was the evolutionary predecessor of orgasms—in a similar way that fins were ancestral to limbs—she and Günter Wagner, an evolutionary biologist from Yale University, first needed to see whether induced ovulation predated spontaneous ovulation in evolutionary history. Their literature search showed that environmental- and male-induced ovulation are found in earlier-evolving mammals, whereas spontaneous ovulation appears in later species, including our own. They also noticed another change. In earlier mammals, the clitoris, which is so often key to a woman’s orgasm, tends to be part of the vagina—guaranteeing that intercourse stimulated this organ and kick-started ovulation. But in later-arising species, particularly primates, the clitoris has moved ever farther away from the vagina, even out of reach of an inserted penis. “A shift in the position of the clitoris is correlated with the loss of intercourse-induced ovulation,” says Martin Cohn, an evolutionary developmental biologist at the University of Florida in Gainesville. “Their hypothesis shifts the focus of the research question from the evolutionary origin of orgasm as an evolutionary novelty, which has long been presumed but not demonstrated, to the evolutionary modification of an ancestral character.”

Pavlićev and Wagner’s theory helps explain why female orgasms during intercourse are relatively rare. “It is new to use [this] innovative, Darwinian approach to understand one of the mysteries of human sexuality—why the male orgasm is warranted, easy-to-reach, and strictly related to reproduction and the female counterpart [is] absolutely not,” says Emmanuele Jannini, an endocrinologist at University of Rome Tor Vergata. The nonnecessity of orgasms for reproduction may also explain why women’s reproductive tracts vary a lot more than men’s—there are fewer constraints, he adds.

Jannini and others point out, however, that this theory needs more confirmation. So far, it deals only with the parallels between the hormonal surges in females during male-induced ovulation and orgasm, but has not looked to see whether there are also parallels in the neurological components of these activities, Lloyd says. And because it’s so difficult to assess whether other mammals feel the pleasure associated with orgasms, the work can only ever address the evolution of some of the components of female orgasm, Puts notes.

Others more strongly criticized the new explanation. Two behavioral neuroendocrinologists, Michael Baum from Boston University and Kim Wallen from Emory University in Atlanta, tell Science that Pavlićev and Wagner misinterpret some previously published results and do not have the details about the hormonal changes during ovulation and orgasm correct. “Their hypothesis remains a good hypothesis,” Wallen says. “But I’m not very convinced by the data they marshal.”

Lloyd says the work drives home how much more we need to learn about female sexuality in other organisms. Wagner and Pavlićev concede that more data are needed to firm up their theory, though for now they have no plans to follow up themselves. Cohn predicts others will pick up the baton. “Pavlićev and Wagner have taken a fascinating, creative, and thoughtful approach to a problem that has been investigated by many but resolved by few,” he says. “I suspect that many investigators will be stimulated to further test the hypotheses raised in this paper.”




The Rhône Glacier in Switzerland
Glaciers in the European Alps are vanishing, transforming landscapes and leaving communities without an essential source of water. Scientists have a fix, but it won’t last forever.
Alpine winters bring snow and rain to fill streams and rivers. In the summer, when rainfall tapers off, melting glaciers feed waterways. Like many humans, glaciers gain weight over the holidays and lose that weight in the summer. In especially cool years they gain a little extra, and in especially hot years they lose a little more. Over time, losses and gains tend to even out, though global warming is changing that.

The Rhône Glacier circa 1900

The Rhône Glacier in 2008
Adrian Michael
Rising temperatures have disrupted the cycle of growth and melt. Glaciers are now shedding ice faster than they can accumulate it. This is bad news for the Swiss Alps, sometimes known as ‘Europe’s water tower’. Shrinking glaciers will provide less water each year. And, because spring will arrive earlier as a result of climate change, glaciers will provide more water at the end of winter and less water during the summer, when it is needed most.
In a recent study, scientists proposed using dams to collect water from melting glaciers. Dams would store surplus runoff in the spring and save it for the summer. If engineers built dams large enough to hold a combined one cubic kilometer of water, they could store enough to offset two-thirds of the summertime shortage expected with climate change.
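The study’s figures imply a rough back-of-the-envelope check. A minimal sketch, using only the numbers quoted above (1 km³ of storage covering two-thirds of the expected shortage; everything else is simple unit conversion):

```python
# Back-of-the-envelope check of the dam-storage figures quoted above.
# Inputs are taken from the article; the rest is unit arithmetic.
DAM_CAPACITY_KM3 = 1.0   # combined reservoir volume proposed
OFFSET_FRACTION = 2 / 3  # share of the expected summer shortage it would cover

# If 1 km^3 covers two-thirds of the shortage, the implied total shortage is:
implied_shortage_km3 = DAM_CAPACITY_KM3 / OFFSET_FRACTION  # 1.5 km^3

# Express the stored volume in litres for scale (1 km^3 = 10^12 L)
capacity_litres = DAM_CAPACITY_KM3 * 1e12

print(f"Implied total summer shortage: {implied_shortage_km3:.2f} km^3")
print(f"Stored volume: {capacity_litres:.0e} litres")
```

In other words, the proposal would still leave roughly half a cubic kilometer of the projected summer deficit uncovered, which is consistent with the article’s caveat that dams are only a partial, temporary fix.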

A dam collects water supplied by the Gries glacier.
Hurni Christoph
In practice, this becomes more difficult. Switzerland’s 4,000 glaciers are dispersed, and many of them are hard to reach. And, unfortunately, dams would only provide a temporary fix. They would merely address the seasonal shift in glacial melt, not the long-term disappearance of glaciers. For that, scientists will need to come up with a more permanent solution.
Jeremy Deaton writes about climate and energy for Nexus Media. Tweet him your questions at @deaton_jeremy.


How Scientists Are Preparing for the First-Ever ‘All-American’ Eclipse

A child watches a partial solar eclipse with a woman at the Sydney Observatory on May 10, 2013. Star-gazers were treated to an annular solar eclipse in remote areas of Australia as the Moon crossed in front of the Sun and blotted out much of its light. An annular eclipse, a phenomenon that occurs when the Moon is too far from the Earth to completely cover the Sun as it passes between the two, was visible across a band of northern Australia, while places such as Sydney saw only a partial eclipse.

Wear protective eye gear. Image credit: Saeed Khan/AFP/Getty Images

Preparations have already begun for what scientists are calling the Great American Eclipse of 2017. For the first time in American history, on August 21, 2017, the “path of totality” of a solar eclipse—that is, the path along which the Moon’s shadow transits—will run exclusively and entirely across U.S. soil, in a 70-mile-wide band stretching from Oregon to South Carolina. Astronomers see it as an opportunity for new scientific observations and public engagement.

The last coast-to-coast solar eclipse over North America occurred in 1918. It followed a path similar to that of the 2017 eclipse, but because its path also brought it over Bermuda—then part of the British Empire, now a British territory—the United States couldn’t claim exclusivity. The 2017 eclipse won’t pass over Bermuda or any other territory. It is truly “all American.”

“This is the science Super Bowl,” said physicist Angela Des Jardins of the Montana Space Grant Consortium, who is leading an experiment in which 50 teams of students in 30 states will fly high-altitude balloons during the eclipse. Speaking on June 2 at the 47th annual conference of the Solar Physics Division of the American Astronomical Society in Boulder, Colorado, she described how the balloons will transmit live video from the edge of space down to the ground. Only once has an eclipse ever been observed from such a height—over Australia, in 2012—though coverage and images were then limited. “There’s never been live footage from the edge of space, and certainly not coverage across an entire continent,” she said. “It’s going to be awesome.”

Scientists anticipate filling other gaps in our understanding of the Sun as well. According to astronomer Shadia Habbal of the University of Hawaii, the eclipse will give scientists “unsurpassed views of the physics of the solar corona,” the halo of plasma surrounding the Sun, which reaches temperatures of about 1,000,000 kelvin. During the eclipse, when the Sun is blotted out, instruments will be able to observe the intricate details of coronal structures, register the escape of material from the Sun, and capture plasma instabilities otherwise too faint to observe. “The solar corona is a rich astronomical laboratory we can observe in exquisite detail,” Habbal noted.


In 2014, the National Science Foundation mounted a study that it repeats every few years, in which it tests scientific literacy by asking Americans whether the Earth goes around the Sun, or the Sun goes around Earth. This question was settled in the 17th century, but apparently word hasn’t yet reached everyone: 26 percent of the American public thinks the Sun orbits the Earth. (“We hope to decrease this percentage by a little bit,” said Des Jardins.) If for no other reason, then, scientists hope the eclipse will spur people to look up and consider the solar system, how it works, and our place in it.

Jay Pasachoff, a professor at Williams College and the astronomy equivalent of Indiana Jones, travels the world to study eclipses. He has observed 63 of them. He wants the public to travel to the path of totality and be active participants in the 2017 eclipse. “We want to let you know in advance that you will be missing some really wonderful stuff if you are not in the zone of totality on August 21,” he said at the conference. (Previously, he described witnessing an eclipse outside of the path as “like going up to the ticket booth of a baseball or football stadium but not going inside.”)

Michael Zeiler, GreatAmericanEclipse.com

To view the eclipse for its maximum duration, observers will have to travel to Hopkinsville, Kentucky. There, the eclipse can be experienced for a full two minutes, 40 seconds, during which time it will become “dark as night in the middle of the day,” according to Pasachoff. The town has been preparing for the event for several years, building accommodations for a large influx of visitors, who are expected to number in the hundreds of thousands. A dress rehearsal of sorts was held in June 2012, when thousands flocked there to view the transit of Venus.

There are other vantages along the continental path, however, that offer shorter viewings but unique experiences. In Kentucky alone, one can forgo Hopkinsville and instead choose Bowling Green, where the totality of the eclipse will be shorter but the Sun’s reddish chromosphere will be more visible. Elsewhere in the country, many are expected to climb mountains and not only witness the eclipse above, but also look down and watch as the Moon’s shadow crawls eerily across the ground below.

All of this, of course, depends on clear skies. “You can’t outrun the clouds, so hope for good weather,” said Pasachoff. But even if unfavorable weather obscures the event, there will still be a creepy darkening of the skies to be enjoyed. Meanwhile, NASA’s website, and others, will be broadcasting the event. In the worst-case scenario, the wait won’t be terribly long for the next solar eclipse—just seven years. On April 8, 2024, the path of totality of a solar eclipse will cross from Mexico to Newfoundland, passing over much of the central-eastern U.S. in the process.