October 2017

Do gravitational anomalies prove we're not living in a computer simulation?


Researchers from Oxford and the Hebrew University believe they've found proof that the universe is too complex to exist inside a computer simulation (Credit: NASA/JPL-Caltech)



Is our entire universe just a computer simulation? It sounds like the premise for a sci-fi movie, but over the years the idea has been debated by scientists in earnest. Now theoretical physicists believe they've found proof that our universe is far too complex to be captured in any simulation. According to the researchers, the hypothesis is done in by gravitational anomalies, tiny "twists" in the fabric of spacetime.
For many, the concept that our civilization might exist inside a simulation goes back to the movie The Matrix, but it has actually been discussed in scientific circles as a legitimate possibility, including at the Isaac Asimov Memorial Debate at the American Museum of Natural History last year. Oxford philosopher Nick Bostrom proposed the idea in a 2003 paper, and the general crux of his argument is a bit of a numbers game.

The simulation argument

Essentially, Bostrom suggested that at the rate technology is advancing today, it's likely that future generations will have access to supercomputers beyond our imagining. And since we tend to use computers to run (relatively primitive) simulations with our current technology, those future humans (or another advanced species) would likely do the same, perhaps simulating their ancestors. And with all that extra processing power at their disposal, it follows that they would run many simulations simultaneously.
As a result, artificial universes would vastly outnumber the one "real" universe, so statistically it's far more likely that we live in one of these simulations. Astrophysicist Neil deGrasse Tyson puts our odds of living in a simulation at 50/50, while Elon Musk is far less optimistic, saying the chance is "one in billions" that we inhabit the one true world.

Artistic impression of a space-time twist in a crystal (Credit: Oxford University)




Taking the idea to the extreme, some even blame the election of Trump and the unprecedented Best Picture mix-up at this year's Oscars on malicious higher beings deliberately messing with our virtual world, like bored SimCity players.

Gravitational anomalies

While it sounds like a fun thought experiment that's impossible to verify, researchers at Oxford and Hebrew University may now have proven that the universe is far too complex to simulate. The key is a quantum phenomenon known as the thermal Hall conductance – in other words, a gravitational anomaly.
These anomalies have been known to exist for decades, but are notoriously difficult to directly detect. Effectively representing twists in spacetime, they arise in physical systems where magnetic fields generate energy currents that cut across temperature gradients, particularly in cases where high magnetic fields and very low temperatures are involved.

Quantum simulations

Monte-Carlo simulations are used in a wide variety of fields, from finance to manufacturing to research, to assess the risks and likely outcomes of a given situation. They can process a huge range of factors at once and simulate the most extreme best- and worst-case scenarios, as well as all possibilities in between.
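To make the idea concrete, here is a minimal Monte-Carlo sketch in Python; the little "project cost" model is invented purely for illustration and is not drawn from the research.

```python
import random

# Minimal Monte-Carlo sketch: sample uncertain inputs many times, compute
# the outcome each time, then read the likely range off the distribution.
# The cost model below is invented purely for illustration.

def simulate_cost():
    labor = random.gauss(100.0, 15.0)                   # uncertain labor cost
    materials = random.uniform(40.0, 80.0)              # uncertain materials cost
    overrun = 1.25 if random.random() < 0.10 else 1.0   # 10% chance of a 25% overrun
    return (labor + materials) * overrun

random.seed(1)
outcomes = sorted(simulate_cost() for _ in range(100_000))
print("median outcome:", round(outcomes[50_000], 1))
print("5th-95th percentile:", round(outcomes[5_000], 1), "to", round(outcomes[95_000], 1))
```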
Quantum Monte-Carlo simulations are used to model quantum systems, but the Oxford and Hebrew University scientists found that quantum systems containing gravitational anomalies are far too complex to ever be simulated this way: the quantities that play the role of probabilities in the simulation acquire a negative sign, so the method can no longer weigh the possibilities sensibly – a notorious obstacle known as the sign problem.
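A toy numerical sketch of that sign problem (an invented example, not the paper's model) shows how nearly cancelling positive and negative weights drown an estimate in noise:

```python
import random

# With signed weights, observables are estimated as
#     <A> = sum(A_i * s_i) / sum(s_i),
# where s_i is the sign of each sampled configuration's weight. If positive
# and negative samples nearly cancel, the denominator hovers near zero and
# the estimate becomes wildly unstable.

def signed_estimate(n_samples, p_negative=0.49):
    num = den = 0.0
    for _ in range(n_samples):
        a = random.gauss(1.0, 0.2)                         # the observable
        s = -1.0 if random.random() < p_negative else 1.0  # nearly cancelling signs
        num += a * s
        den += s
    return num / den if den else float("nan")

random.seed(2)
print([round(signed_estimate(10_000), 2) for _ in range(5)])
# The average sign is only ~0.02, so run-to-run results scatter badly; in the
# anomalous systems described here, taming that noise is argued to cost
# exponentially more samples as the system grows.
```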
Pushing it further, the team says that as a simulated system grows more complex, the computational resources – processors, memory and so on – required to run it must grow in step. That growth might be linear, meaning that every time the number of simulated particles is doubled, the required resources also double. Or it could be exponential, meaning that those resources have to double every time a single new particle is added to the system – and the anomalous systems studied here fall into that second category.
That means that simulating just a few hundred electrons would require a computer with a memory built from more atoms than the universe contains. Considering our universe holds some 10^80 particles – that's a 1 followed by 80 zeroes – the resources needed to simulate it are incomprehensible, putting any such simulation utterly out of reach.
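A quick back-of-the-envelope check of that claim, assuming the textbook cost of brute-force simulation (2^n complex amplitudes for n two-level particles, at 16 bytes each):

```python
# Storage needed to hold the full quantum state of n two-level particles:
# 2**n complex amplitudes at 16 bytes each (a pair of double-precision floats).

def state_vector_bytes(n_particles: int) -> float:
    return 16.0 * 2.0 ** n_particles

for n in (30, 50, 300):
    print(f"{n} particles: {state_vector_bytes(n):.2e} bytes")
# 30 particles already need ~1.7e10 bytes (about 17 GB); 300 particles would
# need ~3e91 bytes, far more than the ~1e80 atoms in the observable universe
# could ever provide.
```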
"Our work provides an intriguing link between two seemingly unrelated topics: gravitational anomalies and computational complexity," says Zohar Ringel, co-author of the paper. "It also shows that the thermal Hall conductance is a genuine quantum effect: one for which no local classical analogue exists."
The research was published in the journal Science Advances.
Source: Oxford University via EurekAlert - newatlas.com
                  

NASA Satellite Reveals Source of El Niño–Fueled Carbon Dioxide Spike




The OCO-2 mission serendipitously coincided with one of the strongest El Niños on record


Credit: Alexandros Maragos/Getty Images



For every ton of carbon dioxide emitted by a power plant's smokestack or a car's exhaust pipe, some portion will stay in the Earth's atmosphere, raising global temperatures, while the rest is absorbed by the oceans or ecosystems on land.
But which parts of the ocean or biosphere act as net sources of carbon dioxide (CO2), and which take up more than they emit into the atmosphere, has been an open question. Figuring that out, along with what mechanisms govern that interplay and how they might change with the climate, is key to understanding how global warming will progress.
The 2014 launch of the Orbiting Carbon Observatory-2 satellite was aimed at beginning to piece together some answers by monitoring the comings and goings of CO2 from the atmosphere with unprecedented precision and over large regions.

So far, the mission has done that and has turned up some surprises along the way. The mission serendipitously coincided with one of the strongest El Niños (an ocean and atmosphere cycle that impacts global weather) on record, allowing scientists to see how the carbon cycle responded and pinpoint exactly where the resulting record pulse of CO2 that entered the atmosphere came from. The satellite's instruments also unexpectedly proved capable of distinguishing the relatively small CO2 signatures of cities and even volcano plumes.
"We're very, very happy with these results," deputy project scientist Annmarie Eldering, of NASA's Jet Propulsion Laboratory, told Live Science.
But the findings, described in a series of five papers in the Oct. 13 issue of the journal Science, are just the first steps toward getting a better handle on the carbon cycle (how carbon flows through land and sea ecosystems and the atmosphere), as OCO-2 heads into an expected extended mission and other space-based projects are scheduled to follow in its wake.

LUCK AND SURPRISES

Carbon dioxide is added to and removed from the atmosphere by a range of competing processes. On land, for example, the photosynthesis of plants takes up CO2, while the decay of plant matter and wildfires release it back into the atmosphere. 
Scientists knew that El Niños were another factor that caused more CO2 to build up in the Earth's atmosphere, and from the major El Niño of 1997-1998, they had some suspicions as to why. For one thing, El Niño tends to lead to drying in parts of the tropics, resulting in less photosynthesis and less uptake of carbon dioxide.

What project scientists couldn't know when the satellite rocketed into space on July 2, 2014, was that it would be perfectly poised to observe how one of the strongest El Niños in the books affected the carbon cycle.
"Sometimes you get really lucky," said Galen McKinley, a carbon cycle scientist at Columbia University's Lamont Doherty Earth Observatory.
These effects were in evidence during the 2015-2016 event, which caused the biggest year-over-year jump in global CO2 concentrations on record, according to the National Oceanic and Atmospheric Administration. But OCO-2 revealed, as is so often the case in science, that the picture was more complicated than previously thought.
The satellite's observations let project scientists piece together the sequence of events of the carbon cycle's response as the El Niño geared up and then reached its peak. They saw that at first there was a tiny dip in carbon dioxide levels over the tropical Pacific because of changes in the structure of the underlying ocean that meant waters gave off less CO2. But that slight decrease was quickly overtaken by the much larger response from terrestrial biomass as drought, heat and wildfires took a toll and caused less CO2 to be absorbed and more to be released. 
The ocean signal "was really a big surprise to us," said Abhishek Chatterjee, a scientist with the Universities Space Research Association working at NASA's Goddard Space Flight Center. The response had been inferred before, "but it was never observed to the degree that we could" with OCO-2, he said.

The team was able to take the analysis a step further by using OCO-2's capability to detect a signature of photosynthesis, which is a marker of the productivity of land plants. Together, the data showed that while the tropical areas of Southeast Asia, South America and Africa all added about the same amount of CO2 into the atmosphere, they did so for different reasons. In Southeast Asia, the hot, dry conditions brought on by El Niño made the region more vulnerable to fire, which releases CO2 into the atmosphere. In South America, dry conditions tamped down plant productivity, meaning the biosphere took up less carbon dioxide, so that the region became a net source of CO2. And in Africa, while rainfall was about normal, exceptional heat increased plant respiration, which caused more CO2 emissions.

MORE WORK TO DO

OCO-2 sensors were also surprisingly good at picking out much smaller CO2 signatures, such as the plume of Vanuatu's Yasur volcano and the contrast between Los Angeles' relatively higher CO2 levels compared with the surrounding suburban and rural areas. 
The satellite could also see how the difference between the urban core and rural areas declined in the summer because plants in the region took up some of the excess.
The ability of satellites to pinpoint these signatures has implications for a wide range of applications, including monitoring emissions to make sure cities and countries are complying with their pledges to reduce CO2. Satellite CO2 measurements could also provide earlier warnings of volcanic eruptions, said Florian Schwandner, also of NASA's JPL, as CO2 emissions from volcanoes increase before an eruption.
OCO-2 has completed its initial two-year planned mission and is expected to begin a three-year extended mission once NASA officials sign off on it, said Eldering, the deputy project scientist.

Scientists are also hoping that two other planned missions go as scheduled to build on OCO-2's work. One, called OCO-3, will use leftover spare parts from OCO-2 and would be mounted on the International Space Station to allow scientists to point at features of interest. That mission has been slated to be cut by the Trump administration, though it remains to be seen whether Congress will go along with that plan.
The other, called the Geostationary Carbon Cycle Observatory, would be able to measure CO2 over continuous areas, such as the U.S., something OCO-2 can't do.
"It's very exciting science, [but] there's a lot more work to do," McKinley said.

         SRC : scientificamerican.com

Astronomers Are Finally Mapping the “Dark Side” of the Milky Way

  Half of our home galaxy is terra incognita. That will soon change

Astronomers directly measured the distance to a star-forming region on the far side of our Milky Way galaxy, past the galactic center. Further measurements could, at last, bring long-hidden regions of the Milky Way to light. Credit: Bill Saxton, NRAO/AUI/NSF; Robert Hurt, NASA


Think of the Milky Way—or search for pictures of it online—and you’ll see images of a standard spiral galaxy viewed face-on, a sprawling pinwheel of starlight and dust containing hundreds of billions of stars. These images, however, are mostly make-believe.


We know the Milky Way is a star-filled spiral galaxy in excess of 100,000 light-years wide, and we know our solar system drifts between two spiral arms at its outskirts, some 27,000 light-years from its center. But much beyond that, our knowledge fades. No space probe or telescope built by humans has ever escaped the Milky Way to turn back and take a portrait; because we are embedded in our galaxy’s disk, we can only see it as a bright band of stars across the sky. For astronomers trying to map it, the effort is a bit like learning the anatomy of a human body from the perspective of a single skin cell somewhere on a forearm. How many spiral arms does the Milky Way have, and how do those spiral arms branch and curl around the galaxy? How many stars does the Milky Way really contain? How much does it weigh? What does our cosmic home actually look like, viewed from another nearby galaxy? Ask an astronomer—and if he or she is being perfectly honest, you will learn that we do not fully know.

Among the biggest obstacles to our knowledge is the disk of the galaxy itself, particularly its center, which is thick with starlight-absorbing dust and rife with energetic astrophysical outbursts that can ruin delicate observations. This means we know very little about the other side of the galaxy. “Optically, it’s like trying to look through a velvet cloth—black as black can be,” says Thomas Dame, an astronomer at the Harvard–Smithsonian Center for Astrophysics (CfA). “In terms of tracing and understanding the spiral structure, essentially half of the Milky Way is terra incognita.” Now, however, new record-breaking measurements are allowing astronomers to pierce the veil of the galactic center as never before, and to construct the best-ever maps of our galaxy’s structure.

Instead of using visible light, Dame and others map the Milky Way by looking for radio emissions from molecular gas clouds and massive, young stars, both of which typically reside in spiral arms. The challenge lies in measuring, in the absence of convenient intergalactic road signs or distance markers, how far off these objects are. Without knowing these distances, astronomers cannot precisely situate any given radio source within the galaxy to accurately reconstruct the Milky Way’s morphology. Since the 1950s astronomers have solved this problem using “kinematic distances,” calculations that treat objects in the Milky Way a bit like pieces of flotsam spiraling into a whirlpool; because things tend to move faster as they approach the center, measuring how fast an object is moving toward or away from us yields an estimate of its distance from the galactic center—and thus from our solar system. Kinematic distances have helped Dame and others discover previously unknown spiral arms and spiral-arm substructures on our solar system’s side of the Milky Way. But the technique breaks down for peering directly across the galaxy, where objects do not move toward or away from us at all but rather purely perpendicularly to our line of sight. To map the Milky Way’s hidden half requires a more direct method.
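As a rough illustration of the method, here is a minimal kinematic-distance sketch assuming a flat rotation curve and round values for the Sun's galactocentric distance and orbital speed (illustrative assumptions, not the survey's fitted parameters):

```python
import math

R0, V0 = 8.3, 240.0  # Sun-center distance (kpc) and rotation speed (km/s);
                     # round values assumed for illustration

def kinematic_distances(l_deg: float, v_lsr: float):
    """Return (near, far) line-of-sight distances in kpc for a source at
    galactic longitude l_deg with line-of-sight velocity v_lsr (km/s)."""
    l = math.radians(l_deg)
    # Flat rotation curve: v_lsr = V0 * sin(l) * (R0/R - 1); solve for R.
    # As sin(l) -> 0 (looking straight toward or across the center), the
    # velocity carries no distance information: the method's blind spot.
    R = R0 / (1.0 + v_lsr / (V0 * math.sin(l)))
    disc = R * R - (R0 * math.sin(l)) ** 2
    if disc < 0:
        raise ValueError("velocity beyond the tangent-point maximum")
    d = math.sqrt(disc)
    return R0 * math.cos(l) - d, R0 * math.cos(l) + d

print(kinematic_distances(30.0, 60.0))  # two answers: the near/far ambiguity
```

The two returned solutions reflect the well-known near/far ambiguity of kinematic distances; extra clues are needed to choose between them, which is one more reason direct parallax measurements are so valuable.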

In a study published October 12 in Science, Dame and an international team of colleagues have demonstrated just that. Using the National Science Foundation’s Very Long Baseline Array (VLBA), an interlinked system of 10 radio telescopes stretching across Hawaii, North America and the Caribbean, the astronomers have directly measured the distance to an object called G007.47+00.05, a star-forming region located on the opposite side of the galaxy from our solar system. The measurement showed the region to be some 66,000 light-years away—nearly 40,000 light-years beyond the galactic center, and roughly double the distance of the previous record-holding direct measurement of distance in the Milky Way.

The team relied on a timeworn technique called parallax, which measures the apparent shift in an object’s celestial position when seen from opposing sides of the Earth’s orbit around the sun. You can see parallax on smaller scales simply by holding a finger in front of your face and winking one eye then the other. Your finger will seem to jump from side to side; calculating its distance from your face is as simple as measuring the angle of its apparent shift. The smaller the angle, the greater the distance. And the wider the distance between your two detectors, be they eyes or radio dishes, the more acute your measurement can be.
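In numbers, the rule is simple: a parallax of p arcseconds puts an object at 1/p parsecs. A minimal sketch follows; the 49-microarcsecond input is back-computed from the article's roughly 66,000 light-year figure, for illustration.

```python
# Trigonometric parallax in one line: distance [parsecs] = 1 / p [arcseconds],
# with 1 parsec ~ 3.2616 light-years.

PC_TO_LY = 3.2616

def parallax_to_light_years(p_arcsec: float) -> float:
    return (1.0 / p_arcsec) * PC_TO_LY

# ~49 microarcseconds, inferred here from the ~66,000 light-year result:
print(parallax_to_light_years(49e-6))  # ~66,600 light-years
```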

The VLBA’s parallax observations took place in 2014, when Earth was on one side of its orbit, and then six months later in 2015, when our planet was on the opposite side of the sun. This maximized the sensitivity of the technique, allowing it to measure the minuscule shift in the apparent position of the distant star-forming region. According to lead author Alberto Sanna, a postdoctoral researcher at the Max Planck Institute for Radio Astronomy in Germany, the VLBA’s measurement is “equivalent to seeing a baseball on the surface of the moon.” The feat, Sanna says, shows “we can measure the whole extent of our galaxy, to accurately number and map the Milky Way’s spiral arms and know their true shapes, so that we can learn what the Milky Way really looks like.”

“It really is excellent work—I believe this is the smallest parallax ever obtained, and it is certainly a milestone in modern observational astronomy,” says Mareki Honma, an astronomer at the National Astronomical Observatory of Japan. Honma led a separate team that independently measured the distance to G007.47+00.05 in 2016, finding a similar value. Those measurements, however, were not accurate enough to obtain a parallax, and relied instead on tracking the star-forming region’s so-called “proper motion” across the plane of the sky. The similarity between the two teams’ results, Honma says, suggests proper motion alone can be a useful tool for determining distances to objects on the other side of the galaxy.

Already, the confirmed distance for this particular star-forming region is redrawing galactic maps. In 2011 Dame and colleagues used radio measurements to tentatively trace the path of one spiral arm, called Scutum–Centaurus. Their fragmentary measurements suggested this arm might wrap around almost the entirety of the Milky Way, but they lost its trail—and crucial evidence for its galaxy-encircling breadth—in the vicinity of the dark, roiling galactic center. This star-forming region “runs right through one of the features we identified in 2011, and adds evidence that the Scutum–Centaurus arm is really a major structure in our galaxy,” Dame says. “In 2011 we wrote that we may never sort this out, because proving its distance through the galactic center would be so difficult—but we were being shortsighted, because here it is, six years later!”

The VLBA’s painstaking, Earth-orbit-spanning measurement occurred as part of a larger project, the Bar and Spiral Structure Legacy Survey (BeSSeL) led by Mark Reid, who like Dame is a radio astronomer at the CfA and a co-author on the Science study. Now in its concluding stages, BeSSeL used 3,500 hours on the VLBA to obtain more than 200 distance measurements for star-forming regions scattered throughout the Milky Way. Many of these readings are now tracing out new details in the galaxy’s filigree of spiral arms.

Which is a good start—but being in the Northern Hemisphere, the VLBA and BeSSeL cannot survey most of the star-forming regions visible from the southern sky. And even if they could, parallax alone will not fill in the galactic map. Because each parallax measurement for far-distant star-forming regions on the other side of the galaxy is so difficult and time-consuming to obtain, astronomers widely agree such measurements will chiefly serve as important calibration points to augment existing kinematic distance measurements. Further progress will come from a combination of parallax, proper motion and kinematic distance data via surveys using Southern Hemisphere–based radio telescopes, as well as from space-based data from the European Space Agency’s Gaia satellite. The latter is using visible-light parallax measurements to pin down the precise positions of a billion of the Milky Way’s stars. Taken together, the resulting map will help astronomers nail down many still-unknown fundamental aspects of our galaxy, such as how fast and uniformly it rotates. This will let them finally determine just how massive the Milky Way really is, potentially yielding new insights into our galaxy’s inventory of stars, dark matter and small satellites that lurk at its edges. All of this will help scientists understand how the Milky Way first came to be, and much that has happened to it since.

“How important is it, really, for us to be able to see clear across to the other side of our own galaxy?” asks Tom Bania, a radio astronomer at Boston University involved in some of the southern surveys. “It is the most important thing in all of astrophysics. It took humankind thousands of years to map the Earth accurately; a map of the galaxy will constrain about a dozen or so models of the structure and evolution of the Milky Way. To me, perhaps the ‘Holy Grail’ of astronomy is to provide a clear perspective of our relationship to the physical universe. The map of our galaxy is a part of that, and that map is still incomplete.”

Soon, that could change. Thanks to BeSSeL and its ilk, Reid notes, “in only a few more years we should have a map that shows us what the Milky Way really looks like.”


SRC : www.scientificamerican.com

Experimental Drug That Mutes Defective Genes Raises New Hopes


RNA interference systems would target genetic sources and shut down protein production


Computer illustration of cytotoxic T-lymphocytes attacking a cancer cell. Credit: Juan Gaertner/Getty Images





The experimental drug has startling powers: It can turn down a mutant gene in a patient’s body, stopping the production of proteins that cause a terribly painful rare disease.

A crucial, late-stage clinical trial showed that the drug works—and that it’s safe. And now the biotech company behind it, Alnylam, is poised to bring this first-of-its-kind therapy to market.


The news has thrilled both patients and scientists, who have been working for decades on the technology to mute misbehaving genes, known as RNA interference, or RNAi. They’ve understood for two decades how the biology works. But it’s been a long, slow slog to figure out how to deliver RNAi therapies to the right cells safely and effectively. Alnylam alone has spent 15 years, and more than $1 billion, on the effort.

 So does the company’s recent success herald an explosion of new drugs that can shut down troublesome genes?

 Maybe.

The RNAi delivery systems remain highly complex—and the most effective technologies are still protected by patents that make it difficult for startups to get into the field. Safety concerns persist with other RNAi drugs in development: Last year, for instance, Alnylam had to scrap revusiran, one of its most advanced drugs, after it exacerbated, rather than alleviated, pain in a rare nerve disease called transthyretin amyloidosis. And several patients died in the clinical trial, though it's still not clear exactly why. Alnylam's stock plummeted by half on that news.


The issue of safety has haunted the RNAi field for many years now: Although it's possible to silence genes, early work produced off-target effects that created unforeseen toxicities. So the major focus of RNAi research in recent years has been drug delivery: how to safely deposit the RNAi payload in the right tissues while minimizing the impact on other bodily functions. That has proven challenging. While delivery is working, for now, in diseases of the liver, the field still needs to validate whether RNAi can work in other organs.

Alnylam is testing seven RNAi drugs in the clinic, for conditions ranging from hepatitis B to high cholesterol. A handful of other companies are also in the field, working on therapies that treat diseases of the central nervous system or enhance cancer immunotherapy. But we’re unlikely to see a flood of startups suddenly raising tens of millions to pursue RNAi.

“I don’t think you’ll see new companies popping up in RNA interference. The intellectual property around the field is too constraining,” said Doug Fambrough, CEO of RNAi competitor Dicerna Pharmaceuticals.

Still, the field does have enormous promise — and market potential.

Most drugs work by targeting ill-formed or malfunctioning proteins and trying to block or eliminate them. RNAi, by contrast, goes after the genetic source of the faulty protein production and shuts that system down.

When it works, it can ease symptoms in patients with no other options.

Alnylam’s lead RNAi drug patisiran, aimed at treating a rare nerve disorder called familial amyloid polyneuropathy, is projected to ultimately exceed $1 billion in worldwide sales at its peak, which is expected in 2023. The drug, which is being developed in conjunction with Sanofi, should be up for review by the Food and Drug Administration in a couple months, and will be up for European regulatory approval next year.

Alnylam’s stock shot up so much on the news, its market capitalization now exceeds $11 billion. “Alnylam’s results came out with a bang, not a whimper,” said Alnylam CEO John Maraganore. “It highlights the fact that these medicines can really be transformative, turning off production of genetic disease.”


A BIG WIN IN WORMS PROVES TOUGH TO TRANSLATE TO HUMANS




The phenomenon of RNA interference was first observed in the 1990s in nematodes, and in 1998 scientists Andrew Fire and Craig Mello published a seminal paper in Nature demonstrating that genes in these roundworms could be silenced. The duo won the Nobel Prize in 2006 for their work. By that point, several companies had already launched to try to develop new RNAi-based therapeutics—including Alnylam.

Expectations were high — and so was the pressure to create a groundbreaking medicine.

Then came the roadblocks. Scientists hit hurdle after hurdle. And many companies began to crumble: While gene silencing was simple in worms, mammals proved to be far more complex—and safe drug delivery emerged as a major problem. When RNAi therapies weren’t delivered to the right tissues, dangerous side effects showed up in humans that weren’t predicted through animal models. The drugs just weren’t working.

The most dogged RNAi companies—Alnylam among them—began to study better methods to deliver these therapeutics to the right tissues. A few different techniques have since emerged to improve drug delivery and, by extension, safety—such as Alnylam's approach of binding the RNAi therapeutic to a lipid nanoparticle, or fat, to help it settle in the liver. Another method used broadly by RNAi companies is "GalNAc"—a sugar derivative that's attached to RNAi drugs to help them work safely in the liver.

Both have shown preclinical promise, and the lipid nanoparticle approach was used with patisiran to great effect in Alnylam’s positive trial. So the initial hurdles of safe RNAi delivery finally seem surmountable.

“Nobody will put [RNAi therapies] back in the box, therapeutically,” said Phil Sharp, a scientific co-founder of Alnylam and winner of his own RNA-related Nobel Prize. “They are now alive and out there, and more and more people will see them as answers to their problems.”

Still, Sharp cautioned: “It took 15 years to get here, and the next 10 years will be exponentially more impactful. But we won’t see this matured as a pharmaceutical approach for decades.”

A HANDFUL OF BIOTECHS CHASE A REVOLUTION



The closest approximation to another RNAi success comes from Ionis Pharmaceuticals and Biogen, which last year received approval for Spinraza, a drug aimed at spinal muscular atrophy. Children with the disease don’t produce enough of a protein called SMN, and the drug works by amplifying the gene that produces the protein—allowing the body to create more of it. It is, in a sense, the opposite of gene silencing, but it’s another proof point that validates the general concept of creating drugs that mute or amplify defective genes.

It’s also proof of concept that a successful RNAi therapy can be quite lucrative.

Spinraza is priced at $750,000 for the first year of treatment and $375,000 for each subsequent year. (Alnylam has not yet indicated how it will price patisiran if it wins FDA approval.)

Meanwhile, other biotechs working on RNAi continue to churn along. Arbutus Biopharma just landed a $116 million investment from Roivant Sciences to speed development of its RNAi tech as well as other therapeutic platforms in development. RXi Pharmaceuticals is pursuing RNAi therapeutics in a broad array of diseases, from warts to cancer. Arrowhead Pharmaceuticals and Wave Life Sciences have a number of preclinical programs in play, though none have advanced nearly as far as Alnylam.

Dicerna Pharmaceuticals, meanwhile, is preparing to enter the clinic with an RNAi drug that treats primary hyperoxaluria—a rare genetic disease that causes the overproduction and buildup of substances called oxalates in the urine. The company’s therapeutic aims to turn off the enzyme that creates all the excess oxalate.

Alynlam’s success boosted the stock of many of these biotechs; Dicerna’s share price jumped 21 percent immediately after Alynlam released its data on Sept. 20 and has increased steadily ever since.

A SURGE OF INTEREST IN BACK-TO-BASICS RESEARCH



The bull case for RNAi draws from biotech history. As many in the industry point out, monoclonal antibodies have become a hugely lucrative and important therapeutic class—but their development was just as fraught, with just as many hurdles, as RNAi. Scientists finally solved the biggest problems, and by 2024, the market is expected to top $130 billion globally.

RNAi could hit great heights, too, if it can be made to work outside the liver.

“If we can expand the role of RNAi to other organ systems beyond the liver, the likelihood that RNAi could overtake antibodies in terms of importance for diseases of man, animals, and even plants, is certainly there,” said Dr. Geert Cauwenbergh, president and CEO of RXi Pharmaceuticals. “It’ll just take work, like anything else.”

Gene Yeo, an RNA researcher at University of California, San Diego, thinks he can break that liver barrier. He’s building on the ideas of RNAi to form his own startup, Locana Therapeutics. The company aims to use CRISPR gene-editing to craft RNA therapeutics that can be delivered into the central nervous system—clearly a daunting task for most drug makers.

By editing the RNA, the company hopes to find therapies for neurodegenerative diseases like Huntington's and ALS.

“I think the lessons we extract from Alnylam’s successes have a little more to do with the idea of delivery,” Yeo said. “The field was mired by the delivery problem—that is, getting the compounds to the right tissues, and right cell types—and get a durable response. But now we see that we can do that.”

Madhu Lal-Nag, a researcher at the National Institutes of Health who coordinates RNAi research, said she’s starting to see a surge of interest in the field among academics who want to unravel more of the basic science, especially as it becomes clear that other hot fields of research, such as CRISPR gene-editing, face their own series of hurdles.

“Everyone jumped on the CRISPR-Cas9 bandwagon, but there are a host of things that we don’t know about genome-editing that we’re now beginning to see,” Lal-Nag said. “I think that’s been responsible for people going back and taking a look at RNAi. Better the devil you know.”


SRC : www.scientificamerican.com

Yellowstone Supervolcano Could Erupt Quickly, Scientists Say



The supervolcano lurking beneath Yellowstone National Park might be getting ready to explode in an eruption that could be devastating to life on Earth.
Scientists reported during a volcanology conference that it could take as little as a human lifetime for a dormant volcano to wake up and prepare itself for a massive eruption, the New York Times says. For Yellowstone, that type of supereruption last happened more than 600,000 years ago, after magma filled the empty chambers below the Earth’s surface some decades before it blew.
It was previously believed that this buildup took thousands of years, but the new research suggests the timeframe was much tighter.
The New York Times reports that the Yellowstone supervolcano is capable of unleashing enough ash and rock — hundreds of cubic miles at one time — to blanket most of the country in ash and affect the environment of the entire planet.

But it’s not the only supervolcano there is. Campi Flegrei in Italy is another example of one of these natural monsters that could be devastating if it were to erupt. It is just west of Naples, close to the legendary Mount Vesuvius that destroyed the ancient city Pompeii with an eruption in the first century. Experts studying the Italian supervolcano note that Campi Flegrei, which last blew in 1538, has experienced earthquakes and ground uplifting that has made room for magma to build up beneath it.
Supervolcanoes earn that title if they have let loose an eruption of magnitude 8 on the Volcanic Explosivity Index — a scale that runs from 0 to 8. The top level indicates that an eruption released at least 250 cubic miles of magma.
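A quick unit check on those figures, using the conventional threshold of roughly 1,000 cubic kilometers of ejecta for VEI 8 and rounded level boundaries:

```python
import math

CUBIC_MILES_TO_KM3 = 4.168  # one cubic mile in cubic kilometers

print(250 * CUBIC_MILES_TO_KM3)  # ~1,042 km^3: past the ~1,000 km^3 VEI-8 line

def vei_from_km3(volume_km3: float) -> int:
    """Approximate VEI from bulk ejecta volume; each level is ~10x the last
    (thresholds rounded for illustration)."""
    if volume_km3 < 1e-4:
        return 0
    return min(8, int(math.floor(math.log10(volume_km3))) + 5)

print(vei_from_km3(1_000.0))  # 8: supervolcano territory
```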
Predicting volcanic eruptions is difficult, however, and volcanologists are trying to crack the code. In this new research, the scientists found from an analysis of fossilized ash from Yellowstone's most recent supereruption that after magma filled up the area beneath Yellowstone all those thousands of years ago, temperatures and other conditions changed quickly, over the course of just decades, leading to an eruption.



“It’s shocking how little time is required to take a volcanic system from being quiet and sitting there to the edge of an eruption,” scientist Hannah Shamloo told the New York Times.
In recent years, Yellowstone has experienced ground uplift, a sign of activity that could warn of an eruption, since magma buildup takes place beneath the swelling surface.
According to the U.S. Geological Survey, the last eruption at Yellowstone was 640,000 years ago. There was another eruption 660,000 years before that.
Yellowstone National Park doesn't feature the classic image of a volcano, with a mountain looming high in the sky, but it is still home to a massive volcanic structure. Much of the park sits within the Yellowstone caldera, the crater created when the supervolcano's magma erupted and the Earth's crust collapsed into the empty space left behind. Some of its biggest attractions, including the geyser Old Faithful and the Grand Prismatic Spring, are signs of the site's volcanic activity.

                  SRC :  http://www.ibtimes.com

Google to give $1 billion to nonprofits and help Americans get jobs in the new economy





Google will invest $1 billion over the next five years in nonprofit organizations helping people adjust to the changing nature of work, the largest philanthropic pledge to date from the Internet giant.




The announcement of the national digital skills initiative, made by Google CEO Sundar Pichai in Pittsburgh, Pa., on Thursday, is a tacit acknowledgment from one of the world's most valuable companies that it bears some responsibility for rapid advances in technology that are radically reshaping industries and eliminating jobs in the U.S. and around the world.



Pichai's pit stop in an old industrial hub that has reinvented itself as a technology and robotics center is the first on a "Grow with Google Tour." The tour, which will crisscross the country, will work with libraries and community organizations to provide career advice and training. It heads next to Indianapolis in November. "The nature of work is fundamentally changing. And that is shifting the link between education, training and opportunity," Pichai said in prepared remarks at Google's offices in Pittsburgh. "One-third of jobs in 2020 will require skills that aren't common today. It's a big problem."



Google will make grants in its three core areas: education, economic opportunity and inclusion. Already in the last few months, it has handed out $100 million of the $1 billion to nonprofits, according to Pichai.

The largest single grant—$10 million, the biggest Google has ever made—is going to Goodwill, which is creating the Goodwill Digital Career Accelerator. Over the next three years Goodwill, a major player in workforce development, aims to provide 1 million people with access to digital skills and career opportunities.
Pichai says 1,000 Google employees will be available for career coaching.


In all, Google employees will donate 1 million volunteer hours to assist organizations like Goodwill trying to close the gap between the education and skills of the American workforce and the new demands of the 21st century workplace, Pichai said.

The announcements, which drew praise from state and local politicians including Pennsylvania Gov. Tom Wolf, come as Google scrambles to respond to revelations that accounts linked to the Russian government used its advertising system to interfere with the presidential election.

Google is embroiled in a growing number of other controversies, from a Labor Department investigation and a lawsuit by former employees alleging systemic pay discrimination to the proliferation of misinformation in search results and extremist content on YouTube. As the controversies have multiplied, so too have calls for Washington to regulate Google because of its massive scale and global reach.


"This isn't the first time we've seen massive, market-creating and labor market-disrupting companies try to address growing public pressure and possible regulatory limits in this way. But it often has been individual corporate titans who've gotten into philanthropy—Andrew Carnegie, John D. Rockefeller—as a way to rehabilitate their own images, tarnished by anxiety about the size of their companies and treatment of workers," said Margaret O'Mara, a history professor at the University of Washington. "What's interesting here is what this signals about how Google's future business ambitions. It is betting that its next era will be one not of search and apps but of devices and labor market interventions."


Google's not alone in fending off critics. A recent headline in tech news outlet TechCrunch read: "Dear Silicon Valley: America's fallen out of love with you."

The tech industry, once a shiny symbol of American innovation and pride, has found itself on the defensive after the election of Donald Trump, which telegraphed the deepening disillusionment of everyday Americans who have watched the gains of the economic recovery pass them by. 

While whole communities in the nation's heartland have fallen into economic decline, the tech industry, clustered in vibrant coastal hubs like San Francisco and New York, has grown wealthy off new developments that are disrupting how Americans live and work. 

The pace of that innovation is quickening. For years tech companies could not deliver on promises of hyper-intelligent machines capable of performing human tasks. Now the technology is catching up to the aspirations. 

In recent years, Google and other companies have made long strides, from self-driving cars that whisk you to your destination to digital assistants who answer your questions. This new wave of automation that aids consumers in their everyday lives has a dark side: It's killing off traditional jobs and stranding workers, still struggling after the recession, who are unprepared for the shift. 

Google, says O'Mara, will have "undeniably disruptive impacts on the jobs people do and the skills they need for them."

" In the 1960s when computer-aided automation worried the nation, presidential and congressional commissions and government agencies tackled the challenge. 

"Now it's the private sector. And even though $1 billion sounds like a lot, it is a small number compared to government education programs or, for that matter, the balance sheets of large tech companies," O'Mara said. 

When Pichai came to the United States from his native India 24 years ago, it was the first time he had been on a plane. Pittsburgh was the first city he saw. Though Pittsburgh was moored to its early 20th century roots as a steel town, Carnegie Mellon University was already propelling the city into the future. 

"As a new arrival, I was homesick but struck by something new: the sense of optimism," he said. "I remain a technology optimist.

" Pichai envisions that transformation for Pittsburg as a blueprint for the country to make the transition to a new industrial era. On Thursday, Pichai detailed other programs Google is undertaking. 

- Grow with Google is a free online program to help Americans secure the skills they need to get a job or grow their business. Job seekers, business owners and teachers can learn the basics of working with tech, from spreadsheets to email, and get training and certificates through google.com/grow. Google says it has rolled the program out to 27,000 middle and high school students and now plans to expand it to community colleges and vocational programs.

- In January, Google will launch an IT certificate program developed with online education provider Coursera that includes hands-on labs to prepare people for jobs in eight to 12 months and then connects graduates with potential employers. Google will sponsor 2,600 full scholarships through nonprofit organizations. 

- Working with Udacity, Google is creating the Google Developer Scholarship Challenge. The top 10% of applicants who enroll in Google developer courses will receive scholarships. 

- Google will give away 20,000 vouchers to get G Suite certification. 

"We don't have all the answers. The people closest to the problem are usually the people closest to the solution," Pichai said. "We want to help them reach it sooner."


                                      SRC : phys.org

There are only 15 possible pentagonal tiles, research proves



The 15 types of pentagonal tiles and their 4 specific types. Credit: Michaël Rao, Laboratoire d'informatique du parallélisme (CNRS/Inria/ENS de Lyon/Université Claude Bernard Lyon 1)



Tiling the plane with a single pattern is a mathematical problem that has interested humans since Antiquity, notably for the aesthetic quality of tiles in mosaics or tiling. One of the unresolved problems in this field, which had puzzled the scientific community since 1918, has now been definitively resolved thanks to Michaël Rao of the Laboratoire d'informatique du parallélisme (CNRS/Inria/ENS de Lyon/Université Claude Bernard Lyon 1). Using computing tools, he was able to demonstrate that there are only 15 five-sided patterns that can tile the plane. The research is now available on the arXiv preprint server.




There are a number of solutions for covering a floor with a single form, such as triangles, squares, rectangles, hexagons, etc. The exhaustive search for all of the convex forms that can tile the plane—forms with angles smaller than 180° that can cover an entire wall without gaps or overlaps—was initiated by Karl Reinhardt in his 1918 thesis. He showed that all triangles and quadrilaterals can tile the plane, that only 3 types of hexagons can do so, and that no convex polygon with seven or more sides can. Only the question of pentagons remained open.
15 types of pentagons were discovered between 1918 and 2015 in a singular line of research: initiated by Reinhardt in 1918, it went through a number of twists and turns, including new discoveries by amateur mathematicians, up to the widely publicized announcement in 2015 of a 15th type, 30 years after the 14th. Yet the scientific community was still unable to determine whether other forms of pentagons could tile the plane.
Michaël Rao, a CNRS researcher at the Laboratoire d'informatique du parallélisme (CNRS/Inria/ENS Lyon/Université Claude Bernard Lyon 1), has now definitively shown that there is only a finite series of families of pentagons to be taken into account. Rao used a software program to generate all of the possibilities, and showed that 371 families of pentagons could potentially tile the plane. He then tested each of these families using another program, and demonstrated that only 19 types of pentagons met the conditions for angles and side lengths required to tile the plane. Among these 19 types, 15 corresponded to already known types, and the four others proved to be particular cases of these 15 types. Consequently, only 15 types of tiles can tile the plane.
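To give a flavor of what such a computer search involves, here is a toy sketch (a simplification, not Rao's actual program) of one necessary condition: in an edge-to-edge tiling, the pentagon corner angles meeting at each vertex must sum to exactly 360°.

```python
from itertools import combinations_with_replacement

def vertex_combinations(angles, max_corners=6, tol=1e-9):
    """Multisets of corner angles (degrees) that can meet at a tiling vertex,
    i.e. that sum to exactly 360."""
    hits = []
    for k in range(3, max_corners + 1):
        for combo in combinations_with_replacement(sorted(set(angles)), k):
            if abs(sum(combo) - 360.0) < tol:
                hits.append(combo)
    return hits

# Example pentagon (the interior angles of any pentagon total 540 degrees):
print(vertex_combinations((90.0, 90.0, 100.0, 120.0, 140.0)))
# -> [(100.0, 120.0, 140.0), (120.0, 120.0, 120.0), (90.0, 90.0, 90.0, 90.0)]
```

Rao's actual search handles the angle conditions far more carefully (arriving at the 371 candidate families) before testing the side-length constraints.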
Rao was able to settle a century-old problem with his methodology, and to open new perspectives. All of these convex tiles can tile the plane periodically (that is, the tiles repeat infinitely). Yet it is not yet known whether a single tile exists that admits only non-periodic tilings. Fortunately, most of these techniques can also be used for non-convex polygons, and could thus serve as a basis for resolving another problem in the field of tiling, better known as the "einstein problem" (from the German "ein Stein," one stone).


SRC : phys.org