Universe Today
Can Entangled Particles Communicate Faster than Light?
Entanglement is perhaps one of the most confusing aspects of quantum mechanics. On its surface, entanglement allows particles to communicate over vast distances instantly, apparently violating the speed of light. But while entangled particles are connected, they don’t necessarily share information between them.
In quantum mechanics, a particle isn’t really a particle. Instead of being a hard, solid, precise point, a particle is really a cloud of fuzzy probabilities, with those probabilities describing where we might find the particle when we go looking for it. Until we perform a measurement, we can’t know everything we’d like to know about the particle.
These fuzzy probabilities are known as quantum states. In certain circumstances, we can connect two particles in a quantum way, so that a single mathematical equation describes both sets of probabilities simultaneously. When this happens, we say that the particles are entangled.
When particles share a quantum state, then measuring the properties of one can grant us automatic knowledge of the state of the other. For example, let’s look at the case of quantum spin, a property of subatomic particles. For particles like electrons, the spin can be in one of two states, either up or down. Once we entangle two electrons, their spins are correlated. We can prepare the entanglement in a certain way so that the spins are always opposite of each other.
If we measure the first particle, we might randomly find the spin pointing up. What does this tell us about the second particle? Since we carefully arranged our entangled quantum state, we now know with 100% absolute certainty that the second particle must be pointing down. Its quantum state was entangled with the first particle, and as soon as one revelation is made, both revelations are made.
But what if the second particle was on the other side of the room? Or across the galaxy? According to quantum theory, as soon as one “choice” is made, the partner particle instantly “knows” what spin to be. It appears that communication can be achieved faster than light.
The resolution to this apparent paradox comes from scrutinizing what is happening when – and more importantly, who knows what when.
Let’s say I’m the one making the measurement of particle A, while you are the one responsible for particle B. Once I make my measurement, I know for sure what spin your particle should have. But you don’t! You only get to know once you make your own measurement, or after I tell you. But in either case nothing is transmitted faster than light. Either you make your own local measurement, or you wait for my signal.
While the two particles are connected, nobody gets to know anything in advance. I know what your particle is doing, but I only get to inform you at speed slower than light – or you just figure it out for yourself.
So while the process of entanglement happens instantaneously, the revelation of it does not. We have to use good old-fashioned no-faster-than-light communication methods to piece together the correlations that quantum entanglement demands.
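A toy classical simulation (just an illustration of the bookkeeping, not real quantum mechanics) shows why no signal is sent: each observer’s own record looks like fair coin flips, and the perfect anti-correlation only appears once the two records are brought together and compared.

```python
import random

rng = random.Random(42)

def measure_entangled_pair():
    """Toy stand-in for a pair prepared with opposite spins:
    the first outcome is random, the partner's is then fixed opposite."""
    a = rng.choice(["up", "down"])
    b = "down" if a == "up" else "up"
    return a, b

trials = [measure_entangled_pair() for _ in range(10_000)]

# Alice's record, viewed on its own, is statistically just coin flips...
alice_up = sum(1 for a, _ in trials if a == "up") / len(trials)

# ...and the anti-correlation only becomes visible by comparing records,
# which requires ordinary (light-speed-limited) communication.
perfectly_anticorrelated = all(a != b for a, b in trials)
```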
The post Can Entangled Particles Communicate Faster than Light? appeared first on Universe Today.
IceCube Just Spent 10 Years Searching for Dark Matter
Neutrinos are tricky little blighters that are hard to observe. The IceCube Neutrino Observatory in Antarctica was built to detect neutrinos from space, and it is one of the most sensitive instruments of its kind, built in the hope that it might help uncover evidence for dark matter. Any dark matter trapped inside Earth would release neutrinos that IceCube could detect. To date, after 10 years of searching, no excess neutrinos coming from Earth have been found!
Neutrinos are subatomic particles that are extremely light and carry no electric charge. Certain events, such as supernovae and solar activity, generate vast quantities of neutrinos; by now the universe is teeming with them, with trillions passing through every person every second. The challenge is that neutrinos rarely interact with matter, so observing and detecting them is difficult. Like other subatomic particles, neutrinos come in different types: electron neutrinos, muon neutrinos, and tau neutrinos, each associated with a corresponding lepton (an elementary particle with half-integer spin). Studying neutrinos of all types is key to understanding fundamental physical processes across the cosmos.
Chinese researchers are working on a new neutrino observatory called TRIDENT. They built an underwater simulator to develop their plan. Image Credit: TRIDENT

The IceCube Neutrino Observatory began capturing data in 2005, but it wasn’t until 2011 that it began full operations. It consists of over 5,000 football-sized detectors arranged within a cubic kilometre of ice deep below the surface. Arranged in this fashion, the detectors are designed to capture the faint flashes of Cherenkov radiation released when neutrinos interact with the ice. The location near the South Pole was chosen because the thick ice acts as a natural shield against background radiation from Earth.
A view of the IceCube Lab with a starry night sky showing the Milky Way and green auroras. Photo By: Yuya Makino, IceCube/NSF

Using data from the IceCube Observatory, a team of researchers led by R. Abbasi from Loyola University Chicago has been probing the nature of dark matter. This strange, invisible component is thought to make up about 27% of the mass-energy content of the universe. Unfortunately, dark matter doesn’t emit, absorb, or reflect light, making it undetectable by conventional means. One line of thought is that dark matter is made up of Weakly Interacting Massive Particles (WIMPs). These can be gravitationally captured by massive objects like the Sun or Earth, where they annihilate and produce neutrinos. It’s these neutrinos that the team has been hunting for.
The paper published by the team describes their search for muon neutrinos from the centre of the Earth within the 10 years of data captured by IceCube. The team searched chiefly for WIMPs in the mass range of 10 GeV to 10 TeV, but due to the complexity and position of the source (the centre of the Earth), they relied on Monte Carlo simulations. The name is taken from the casinos of Monaco, and the method involves running many random simulations. The technique is used where exact calculation is intractable; it rests on the idea that randomness can be used to solve problems.
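The Monte Carlo idea is easy to demonstrate on a problem far simpler than simulating neutrinos from Earth’s core: estimating π by throwing random points at a square (a textbook illustration of the concept, not the team’s actual simulation code).

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: sample random points in the unit square
    and count the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi(1_000_000))  # converges toward 3.14159... as samples grow
```

The answer is never exact, but its error shrinks predictably as the number of random samples grows, which is what makes the method useful when no closed-form calculation exists.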
After running many simulations of this sort, the team found no excess neutrino flux over background levels from Earth. They conclude, however, that while no evidence has been found yet, an upgrade to the IceCube Observatory may yield more promising results, as it will be able to probe lower-mass events, and hopefully one day solve the mystery of the nature of dark matter.
Source : Search for dark matter from the centre of the Earth with ten years of IceCube data
The post IceCube Just Spent 10 Years Searching for Dark Matter appeared first on Universe Today.
Star Devouring Black Hole Spotted by Astronomers
A team of astronomers has detected a surprisingly fast and bright burst of energy from a galaxy 500 million light-years away. The burst of radiation peaked in brightness after just 4 days and then faded quickly. The team identified the burst, using the Catalina Real-Time Transient Survey with supporting observations from the Gran Telescopio Canarias, as the result of a small black hole consuming a star. The discovery provides an exciting insight into stellar evolution and a rare cosmic phenomenon.
Black holes are stellar corpses where gravity is so intense that nothing, not even light, can escape. They form when massive stars collapse under their own gravity at the end of their lives, forming an infinitesimally small point known as a singularity. The region of space around the singularity is bounded by the event horizon, the point beyond which nothing can escape. Despite the challenges of observing them, black holes can be detected through the effects of their gravity on nearby objects like gas clouds. Many mysteries still surround black holes, so they remain an intense area of study.
3D rendering of a rapidly spinning black hole’s accretion disk and a resulting black hole-powered jet. Credit: Ore Gottlieb et al. (2024)

A team of astronomers led by Claudia Gutiérrez from the Institute of Space Sciences and the Institute of Space Studies of Catalonia used data from the Catalina Real-Time Transient Survey (CRTS) to explore transient events. The CRTS was launched in 2004 and is a wide-field survey that looks for variable objects like supernovae and asteroids. It uses a network of telescopes based in Arizona to scan large areas of sky for short-lived events, and it has provided great insights into the life cycle of stars and the behaviour of distant galaxies.
The 60 inch Mt. Lemmon telescope is one of three telescopes used in the Catalina Sky Survey. Image: Catalina Sky Survey, University of Arizona.

The team detected the bright outburst in a galaxy located 500 million light-years away and published their results in the Astrophysical Journal. The event took place in a tiny galaxy about 400 times less massive than the Milky Way. The burst, designated CSS161010, reached maximum brightness in only 4 days, and 2.5 days later its brightness had dropped by half. Subsequent work revealed that an earlier detection had been picked up by the All-Sky Automated Survey for SuperNovae. Thankfully the detection was early enough to allow follow-up observations by other ground-based telescopes; typically these types of events are difficult to study due to their rapid evolution.
Only a handful of events like CSS161010 have been detected in recent years, and until now their nature was a mystery. The team led by Gutiérrez analysed the spectral properties and found hydrogen lines revealing material travelling at speeds up to 10% of the speed of light. The changes observed in the hydrogen emission lines are similar to those seen in active galactic nuclei, where supermassive black holes reside. The observations suggest a black hole is involved, although not a massive one.
The brightness of the object dropped by a factor of 900 over the following two months. Further spectral analysis at this stage still revealed blueshifted hydrogen lines, indicating high-speed gas outflows. This is not something usually seen in supernovae, suggesting a different origin. The team believe the event is the result of a small black hole swallowing a star.
Source : Astronomers detected a burst caused by a black hole swallowing a star
The post Star Devouring Black Hole Spotted by Astronomers appeared first on Universe Today.
What Makes Brown Dwarfs So Weird?
Meet the brown dwarf: bigger than a planet, and smaller than a star. A category of its own, it’s one of the strangest objects in the universe.
Brown dwarfs typically are defined to have masses anywhere from 12 times the mass of Jupiter right up to the lower limit for a star. And despite their names, they are not actually brown. The largest and youngest ones are quite hot, giving off a steady glow of radiation. In fact, the largest brown dwarfs are almost indistinguishable from red dwarfs, the smallest of the stars. But the smallest, oldest, and coldest ones are so dim they can only be detected with our most sensitive infrared telescopes.
Unlike stars, brown dwarfs don’t generate their own energy through nuclear fusion, at least not for very long. Instead they emit radiation from the leftover heat of their own formation. As that heat escapes, the brown dwarf continues to dim, sliding from fiery red to mottled magenta to invisible infrared. The greater the mass at its birth, the more heat it can trap and the longer it can mimic a proper star, but the ultimate fate is the same for every brown dwarf, regardless of its pedigree.
At first it may seem like brown dwarfs are just extra-large planets, but they get to do something that planets don’t. While brown dwarfs can’t fuse hydrogen in their cores – that takes something like 80 Jupiter masses to accomplish – they can briefly partake in another kind of fusion reaction.
In the cooler heart of a brown dwarf, deuterium (a nucleus of one proton and one neutron) can fuse with a proton to form helium-3, releasing energy in the process. This doesn’t last long; in only a few million years even the largest brown dwarfs use up all their available deuterium, and from there they simply cool off.
As for their size, they tend not to be much larger in diameter than a typical gas giant like Jupiter. That’s because, unlike a star, a brown dwarf has no ongoing source of energy, and thereby pressure, to prop itself up. Instead, all that’s left is the exotic quantum effect known as degeneracy pressure, which means you can only squeeze so many particles into so small a volume. Brown dwarfs sit very close to the limit at which degeneracy pressure can maintain their size given their mass.
This means that despite outweighing Jupiter, they won’t appear much larger. And unlike Jupiter, they are briefly capable of nuclear fusion. After that, however, they spend the rest of their lives wandering the galaxy, slowly chilling out.
The post What Makes Brown Dwarfs So Weird? appeared first on Universe Today.
Archaeology On Mars: Preserving Artifacts of Our Expansion Into the Solar System
In 1971, the Soviet Mars 3 lander became the first spacecraft to land on Mars, though it only transmitted for a couple of minutes before failing. More than 50 years later, it’s still there at Terra Sirenum. The HiRISE camera on NASA’s Mars Reconnaissance Orbiter may have imaged some of its hardware, inadvertently taking part in what could become an effort to document our Martian artifacts.
Is it time to start cataloguing and even protecting these artifacts so we can preserve our history?
Some anthropologists think so.
Justin Holcomb is an assistant research professor of anthropology at the University of Kansas. He and his colleagues argue that it’s time to take Martian archaeology seriously, and the sooner we do, the better and more thorough the results will be. Their research commentary, “The emerging archaeological record of Mars,” was recently published in Nature Astronomy.
Artifacts of the human effort to explore the planet are littered on its surface. According to Holcomb, these artifacts and our effort to reach Mars are connected to the original human dispersal from Africa.
“Our main argument is that Homo sapiens are currently undergoing a dispersal, which first started out of Africa, reached other continents and has now begun in off-world environments,” said lead author Holcomb. “We’ve started peopling the solar system. And just like we use artifacts and features to track our movement, evolution and history on Earth, we can do that in outer space by following probes, satellites, landers and various materials left behind. There’s a material footprint to this dispersal.”
Tracks from Opportunity stretch across this vista taken by the rover on Sol 3,781 in September 2014. This is from only ten years ago, but those missions already seem historical. Credit: NASA/JPL-Caltech/Cornell Univ./Arizona State Univ.

It’s tempting to call debris from failed missions wreckage, or even space junk like we do the debris that orbits Earth. But things like spent parachutes and heat shields are more than just wreckage. They’re artifacts the same way other cast-offs are artifacts. In fact, what archaeologists often do in the field is sift through trash. “Trash is a proxy for human behaviour,” said one anthropologist.
In any case, one person’s trash can be another person’s historical artifact.
Spacecraft that land on Mars have to eject equipment – like this protective shell from Perseverance, imaged by Ingenuity – on their way to the Martian surface. Spacecraft can’t reach the surface without protection. As time passes, trash and debris like this become important artifacts. NASA/JPL-Caltech

“These are the first material records of our presence, and that’s important to us,” Holcomb said. “I’ve seen a lot of scientists referring to this material as space trash, galactic litter. Our argument is that it’s not trash; it’s actually really important. It’s critical to shift that narrative towards heritage because the solution to trash is removal, but the solution to heritage is preservation. There’s a big difference.”
Fourteen missions to Mars have left their mark on the Red Planet in the form of artifacts. According to the authors, this is the beginning of the planet’s archaeological record. “Archaeological sites on the Red Planet include landing and crash sites, which are associated with artifacts including probes, landers, rovers and a variety of debris discarded during landing, such as netting, parachutes, pieces of the aluminum wheels (for example, from the Curiosity rover), and thermal protection blankets and shielding,” they write.
This figure from the research shows fourteen missions to Mars, along with key sites and examples of artifacts. MER A and B are NASA’s Spirit and Opportunity. a) Basemap generated from data derived from the Mars Orbiter Laser Altimeter (MOLA) and the High-Resolution Stereo Camera (HRSC). b) Viking 1 lander (NASA/JPL). c) Trackways created by NASA’s Perseverance rover (NASA/JPL-Caltech/Arizona State University). d) Dacron netting used in thermal blankets, photographed by NASA’s Perseverance rover using its onboard Front Left Hazard Avoidance Camera A (NASA/JPL-Caltech/Arizona State University). e) China’s Tianwen-1 lander and Zhurong rover in southern Utopia Planitia photographed by HiRISE (NASA/JPL-Caltech/University of Arizona). f) The ExoMars Schiaparelli lander crash site in Meridiani Planum (NASA/JPL-Caltech/University of Arizona). g) Illustration of the Soviet Mars Program’s Mars 3 space probe (NASA). h) NASA’s Phoenix lander with a digital video disc (DVD) in the foreground (NASA/JPL-Caltech).
Other features include rover tracks and rover drilling and sampling sites.
Curiosity captured this self-portrait at the ‘Windjana’ Drilling Site in 2014. The right panel shows its work. Image Credit: NASA/JPL-Caltech/MSSS

We’re already partway to taking our abandoned artifacts seriously. The United Nations keeps a list of objects launched into space, called the Register of Objects Launched into Outer Space. It’s a way of identifying which countries are liable and responsible for objects in space (but not which private billionaires). The Register was first implemented in 1976, and about 88% of the crewed spacecraft, elements of the ISS, satellites, probes, and landers launched into space are registered.
UNESCO also keeps a register of heritage sites, including archaeological and natural sites. The same could be done for Mars.
This UNESCO list of heritage sites shows both natural and cultural heritage sites, including ones that are considered to be in danger. Image Credit: UNESCO

There’s already one attempt to start documenting and mapping sites on Mars. The Perseverance rover team is documenting all of the debris it encounters to make sure it can’t contaminate sampling sites. There are also concerns that debris could pose a hazard to future missions.
According to one researcher, there is over 7,100 kg (about 16,000 pounds) of debris on Mars, not including working spacecraft. While much of it is just scraps being blown around by the wind and broken into smaller pieces, there are also larger pieces of debris and nine intact yet inoperative spacecraft.
So far, there have been only piecemeal attempts to document these Martian artifacts.
“Despite efforts from the USA’s Perseverance team, there exists no systematic strategy for documenting, mapping and keeping track of all heritage on Mars,” the authors write. “We anticipate that cultural resource management will become a key objective during planetary exploration, including systematic surveying, mapping, documentation, and, if necessary, excavation and curation, especially as we expand our material footprint across the Solar System.”
Holcomb and his co-authors say we must understand that our spacecraft debris is the archaeological record of our attempt to explore not just Mars but the entire Solar System. Our effort to understand Mars is also part of our effort to understand our own planet and how humanity arose. “Any future accidental destruction of this record would be permanent,” they point out.
The authors say there’s a crucial need to preserve things like Neil Armstrong’s first footsteps on the Moon, the first impact on the lunar surface by the USSR’s Luna 2, and even the USSR’s Venera 7 mission, the first spacecraft to land on another planet. This is our shared heritage as human beings.
A bootprint in the lunar regolith, taken during Apollo 11 in 1969. Credit: NASA.

“These examples are extraordinary firsts for humankind,” Holcomb and his co-authors write. “As we move forward during the next era of human exploration, we hope that planetary scientists, archaeologists and geologists can work together to ensure sustainable and ethical human colonization that protects cultural resources in tandem with future space exploration.”
There are many historical examples of humans getting this type of thing wrong, particularly during European colonization of other parts of the world. Since we’re still at (we hope) the beginning of our exploration of the Solar System, we have an opportunity to get it right from the start. It will take a lot of work and many discussions to determine what this preservation and future exploration can look like.
“Those discussions could begin by considering and acknowledging the emerging archaeological record on Mars,” the authors conclude.
The post Archaeology On Mars: Preserving Artifacts of Our Expansion Into the Solar System appeared first on Universe Today.
Building the Black Hole Family Tree
In 2019, astronomers observed an unusual gravitational chirp. Known as GW190521, it was the last scream of gravitational waves as a black hole of 66 solar masses merged with a black hole of 85 solar masses to become a 142 solar mass black hole. The data were consistent with all the other black hole mergers we’ve observed. There was just one problem: an 85 solar mass black hole shouldn’t exist.
All the black hole mergers we’ve observed involve stellar mass black holes. These form when a massive star explodes as a supernova and its core collapses to become a black hole. An old star needs to be at least ten times the mass of the Sun to become a supernova, which can create a black hole of about 3 solar masses. Larger stars can create larger black holes, up to a point.
The first generation of stars in the cosmos were likely hundreds of solar masses. For a star above 150 solar masses or so, the core becomes subject to what is known as pair-instability: gamma rays produced in the core grow so energetic that they convert into electron-positron pairs. This saps the radiation pressure supporting the core, triggering a runaway thermonuclear explosion that rips the star apart entirely, leaving no black hole behind. To overcome the pair-instability, a progenitor star would need a mass of 300 Suns or more. This means that the mass range of stellar black holes has a “pair-instability gap.” Black holes from 3 solar masses up to about 65 solar masses can form from regular supernovae, and black holes above 130 solar masses can form from the collapse of the most massive stars, but black holes between 65 and 130 solar masses shouldn’t exist.
For GW190521, the 66 solar mass black hole is close enough to the limit that it likely formed from a single star. The 85 solar mass black hole, on the other hand, is smack-dab in the middle of the forbidden range. Some astronomers have argued that the larger black hole might have formed from a hypothetical boson star known as a Proca star, but if that’s true, then GW190521 is the only evidence that Proca stars exist. More likely, the 85 solar mass black hole formed from the merger of two smaller black holes, making GW190521 a hierarchical merger. The difficulty with that idea is that black hole mergers are often asymmetrical, giving the resulting black hole a kick that can eject it from its region of origin. Multiple black hole mergers would only occur under certain circumstances, which is where a new study in The Astrophysical Journal comes in.
The authors looked at how the mass, spin, and motion of a merging black hole pair determine the mass, spin, and recoil velocity of the resulting black hole. By creating a statistical distribution of outcomes, the team could then work backwards. Given the mass, spin, and velocity of a “forbidden” black hole relative to its environment, what were the properties of its black hole ancestors? When the authors applied this to the progenitors of GW190521, they found that the only possible ancestors would have given a relatively large recoil velocity. This means that the merger must have occurred within the region of an active galactic nucleus, where the gravitational well would be strong enough to hold the system together.
This work has implications for what are known as intermediate mass black holes (IMBHs), which can have masses of hundreds or thousands of Suns. It has been thought that IMBHs form within globular clusters, but if the recoil velocities of black hole mergers are large, this would be unlikely. As this study shows, GW190521 could not have occurred in a globular cluster.
Reference: Araújo-Álvarez, Carlos, et al. “Kicking Time Back in Black Hole Mergers: Ancestral Masses, Spins, Birth Recoils, and Hierarchical-formation Viability of GW190521.” The Astrophysical Journal 977.2 (2024): 220.
The post Building the Black Hole Family Tree appeared first on Universe Today.
Need to Accurately Measure Time in Space? Use a COMPASSO
Telling time in space is difficult, but it is absolutely critical for applications ranging from testing relativity to navigating down the road. Atomic clocks, such as those used on the Global Navigation Satellite System (GNSS) network, are accurate, but only up to a point. Moving to even more precise navigation tools would require even more accurate clocks. There are several solutions at various stages of technical development, and one from Germany’s DLR, called COMPASSO, plans to prove optical clocks in space as a potential successor.
There are several problems with existing atomic clocks – one has to do with their accuracy, and one has to do with their size, weight, and power (SWaP) requirements. Current atomic clocks used in GNSS are relatively compact, coming in at around 0.5 kg and 125 × 100 × 40 mm, but they lack accuracy. In precision-timekeeping terminology, they have a “stability” of 10⁻⁹ over 10,000 seconds. That sounds absurdly accurate, but it is not good enough for a more precise GNSS.
Alternatives, such as optical lattice clocks, are more accurate, with stabilities down to 10⁻¹⁸ over 10,000 seconds. However, they can measure 0.5 × 0.5 × 0.5 m and weigh hundreds of kilograms. Given satellite space and weight constraints, they are far too large to serve as the basis for satellite timekeeping.
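A back-of-envelope calculation shows why stability matters for navigation: a clock’s fractional stability multiplied by the elapsed time gives the accumulated timing error, and multiplying that by the speed of light converts it into a ranging error (the numbers below use the stabilities quoted above; this is a rough sketch, not a full GNSS error budget).

```python
C = 299_792_458  # speed of light, m/s

def ranging_error_m(fractional_stability, elapsed_s):
    """Accumulated clock error over an interval, expressed as the
    ranging error it would cause for a one-way navigation signal."""
    timing_error_s = fractional_stability * elapsed_s
    return timing_error_s * C

# GNSS-class clock: ~1e-9 over 10,000 s -> kilometres of ranging error
print(ranging_error_m(1e-9, 10_000))   # ~3,000 m
# Lattice-clock-class stability: ~1e-18 -> micrometres
print(ranging_error_m(1e-18, 10_000))  # ~3e-6 m
```

The nine-order-of-magnitude gap between the two results is the gap COMPASSO aims to start closing in a satellite-sized package.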
Rendering of a passive hydrogen maser atomic clock.

To find a middle ground, ESA has developed a technology roadmap focused on improving clock stability while keeping the clock small enough to fit on a satellite. One example of a technology on the roadmap is a cesium-based clock cooled by lasers and combined with a hydrogen-based maser, a microwave analogue of the laser. NASA is not missing out on the fun either, with its work on a mercury-ion clock that has already been tested in orbit for a year.
COMPASSO hopes to surpass them all. Three key technologies enable the mission: two iodine frequency references, a “frequency comb,” and a “laser communication and ranging terminal.” Ideally, the mission will be launched to the ISS, where it will sit in space for two years, constantly keeping time. The accuracy of those measurements will be compared to alternatives over that time frame.
Lasers are the key to the whole system. The iodine frequency references exploit the very distinct absorption lines of molecular iodine, which serve as a frequency reference for the frequency comb, a specialized laser whose output spectrum looks like comb teeth at evenly spaced frequencies. Those teeth can be tuned against the iodine reference, allowing any drift in the comb to be corrected.
engineerguy explains how atomic clocks work with the GNSS. Credit – engineerguy YouTube Channel
The comb then provides a method for phase locking for a microwave oscillator, a key part of a standard atomic clock. Overall, this means that the stability of the iodine frequency reference is transferred to the frequency comb, which is then again transferred to the microwave oscillator and, therefore, the atomic clock. In COMPASSO’s case, the laser communication terminal is used to transmit frequency and timing information back to a ground station while it is active.
Work on COMPASSO began in 2021, and a paper describing its details and some breadboard prototypes was released this year. It will hop on a ride to the ISS in 2025 to start its mission to make the world a more accurately timed place – and maybe improve our navigation abilities as well.
Learn More:
Kuschewski et al – COMPASSO mission and its iodine clock: outline of the clock design
UT – Atomic Clocks Separated by Just a few Centimetres Measure Different Rates of Time. Just as Einstein Predicted
UT – Deep Space Atomic Clocks Will Help Spacecraft Answer, with Incredible Precision, if They’re There Yet
UT – A New Atomic Clock has been Built that Would be off by Less than a Second Since the Big Bang
Lead Image:
Benchtop prototype of part of the COMPASSO system.
Credit – Kuschewski et al
The post Need to Accurately Measure Time in Space? Use a COMPASSO appeared first on Universe Today.
A Binary Star Found Surprisingly Close to the Milky Way's Supermassive Black Hole
Binary stars are common throughout the galaxy. Roughly half the stars in the Milky Way are part of a binary or multiple system, so we would expect to find them almost everywhere. However, one place we wouldn’t expect to find a binary is at the center of the galaxy, close to the supermassive black hole Sagittarius A*. And yet, that is precisely where astronomers have recently found one.
There are several stars near Sagittarius A*. For decades, we have watched them orbit the great gravitational well. The motion of those stars was the first strong evidence that Sag A* was indeed a black hole. At least one star orbits so closely that we can see its light redshift as it reaches peribothron, its point of closest approach to the black hole.
But we also know that stars should be ever wary of straying too close to the black hole. The closer a star gets to the event horizon of a black hole, the stronger the tidal forces on the star become. There is a point where the tidal forces are so strong a star is ripped apart. We have observed several of these tidal disruption events (TDEs), so we know the threat is very real.
Tidal forces also pose a threat to binary stars. It wouldn’t take much for the tidal pull of a black hole to disrupt binary orbits, causing the stars to separate forever. Tidal forces would also tend to disrupt the formation of binary stars in favor of larger single stars. Therefore astronomers assumed the formation of binary stars near Sagittarius A* wasn’t likely, and even if a binary formed, it wouldn’t last long on cosmic timescales. So astronomers were surprised when they found the binary system known as D9.
Distance and age of D9 in the context of basic dynamical processes and stellar populations in the Galactic center. Credit: Peißker et al

The D9 system is young, only about 3 million years old. It consists of one star of about 3 solar masses and another with a mass about 75% that of the Sun. The orbit of the system brings it within 6,000 AU of Sag A* at closest approach, which is surprisingly close. Simulations of the D9 system estimate that in about a million years the black hole’s gravitational influence will cause the two stars to merge into a single star. But even this short lifetime is unexpected, and it shows that the region near a supermassive black hole is much less destructive than we thought.
It’s also pretty amazing that the system was discovered at all. The center of our galaxy is shrouded in gas and dust, meaning that we can’t observe the area in the visible spectrum. We can only see stars in the region with radio and infrared light. The binary stars are too close together for us to identify them individually, so the team used data from the Enhanced Resolution Imager and Spectrograph (ERIS) on the ESO’s Very Large Telescope, as well as archive data from the Spectrograph for INtegral Field Observations in the Near Infrared (SINFONI). This gave the team data covering a 15-year timespan, which was enough to watch the light of D9 redshift and blueshift as the stars orbit each other every 372 days.
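As a rough cross-check (a back-of-the-envelope sketch using the rounded masses and period quoted above, not a figure from the paper), Kepler’s third law in solar units relates the 372-day period to the stars’ separation:

```python
# Kepler's third law in solar units: a^3 = (M1 + M2) * P^2,
# with a in AU, masses in solar masses, and P in years.
M1 = 3.0           # approximate mass of the primary (solar masses)
M2 = 0.75          # approximate mass of the secondary (solar masses)
P = 372 / 365.25   # orbital period in years

a = ((M1 + M2) * P**2) ** (1 / 3)  # semi-major axis in AU
print(f"separation ~ {a:.1f} AU")  # ~1.6 AU, tiny compared to the 6,000 AU approach to Sag A*
```

That sub-2-AU separation is why the pair blends into a single point of light and has to be teased apart spectroscopically.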
Now that we know the binary system D9 exists, astronomers can look for other binary stars. This could help us solve the mystery of how such systems can form so close to the gravitational beast at the heart of our galaxy.
Reference: Peißker, Florian, et al. “A binary system in the S cluster close to the supermassive black hole Sagittarius A*.” Nature Communications 15.1 (2024): 10608.
The post A Binary Star Found Surprisingly Close to the Milky Way's Supermassive Black Hole appeared first on Universe Today.
New Research Suggests Io Doesn’t Have a Shallow Ocean of Magma
Jupiter’s moon Io is the most volcanically active body in the Solar System, with roughly 400 active volcanoes regularly ejecting magma into space. This activity arises from Io’s eccentric orbit around Jupiter, which produces incredibly powerful tidal interactions in the interior. In addition to powering Io’s volcanism, this tidal energy is believed to support a global subsurface magma ocean. However, the extent and depth of this ocean remains the subject of debate, with some supporting the idea of a shallow magma ocean while others believe Io has a more rigid, mostly solid interior.
In a recent NASA-supported study, an international team of researchers combined data from multiple missions to measure Io’s tidal deformation. According to their findings, Io does not possess a magma ocean and likely has a mostly solid mantle. Their findings further suggest that tidal forces do not necessarily lead to global magma oceans on moons or planetary bodies. This could have implications for the study of exoplanets that experience tidal heating, including Super-Earths and exomoons similar to Io that orbit massive gas giants.
The study was led by Ryan Park, a Senior Research Scientist and Principal Engineer at NASA’s Jet Propulsion Laboratory (JPL). He was joined by colleagues from NASA JPL, the Centro Interdipartimentale di Ricerca Industriale Aerospaziale (CIRI) at the Università di Bologna, the National Institute for Astrophysics (INAF), the Sapienza Università di Roma, the Southwest Research Institute (SwRI), NASA’s Goddard Space Flight Center, and multiple universities. Their findings were described in a paper that appeared in the journal Nature.
An amazingly active Io, Jupiter’s “pizza moon,” shows multiple volcanoes and hot spots, as seen with Juno’s infrared camera. Credit: NASA/JPL-Caltech/SwRI/ASI/INAF/JIRAM/Roman Tkachenko

As they explain in their paper, two types of analysis have predicted the existence of a global magma ocean. On the one hand, magnetic induction measurements conducted by the Galileo mission suggested the existence of a magma ocean within Io, approximately 50 km [~30 mi] thick and located near the surface. These results also implied that about 20% of the material in Io’s mantle is melted. However, these results were the subject of debate for many years. In recent years, NASA’s Juno mission conducted multiple flybys of Io and the other Jovian moons and obtained data that supported this conclusion.
In particular, the Juno probe conducted a global mapping campaign of Io’s volcanoes, which suggested that the distribution of volcanic heat flow is consistent with the presence of a global magma ocean. However, these findings have led to considerable debate over whether such techniques can actually distinguish whether a shallow global magma ocean drives Io’s volcanic activity. This is the question Park and his colleagues sought to address in their study:
“In our study, Io’s tidal deformation is modeled using the gravitational tidal Love number k2, which is defined as the ratio of the imposed gravitational potential from Jupiter to the induced potential from the deformation of Io. In short, if k2 is large, there is a global magma ocean, and if k2 is small, there is no global magma ocean. Our result shows that the recovered value of k2 is small, consistent with Io not having a global magma ocean.”
The significance of these findings goes far beyond the study of Io and other potentially volcanic moons. Beyond the Solar System, astronomers have discovered countless bodies that (according to current planetary models) experience intense tidal heating. This includes rocky exoplanets several times the size and mass of Earth (Super-Earths), as well as tidally locked planets like those of the TRAPPIST-1 system. These findings are also relevant for the study of exomoons that experience intense tidal heating (similar to the Jovian moons). As Park explained:
“Although it is commonly assumed among the exoplanet community that intense tidal heating may lead to magma oceans, the example of Io shows that this need not be the case. Our results indicate that tidal forces do not universally create global magma oceans, which may be prevented from forming due to rapid melt ascent, intrusion, and eruption, so even strong tidal heating – like that expected on several known exoplanets and super-Earths – may not guarantee the formation of magma oceans on moons or planetary bodies.”
Further Reading: Nature
The Mysterious Case of the Resurrected Star
The star HD 65907 is not what it appears to be. It’s a star that looks young, but on closer inspection is actually much, much older. What’s going on? Research suggests that it is a resurrected star.
Astronomers employ different methods to measure a star’s age. One is based on its brightness and temperature. All stars follow a particular path in life, known as the main sequence. The moment they begin fusing hydrogen in their cores, they maintain a strict relationship between their brightness and temperature. By measuring these two properties, astronomers can roughly pin down the age of a star. But there are other techniques, like measuring the amount of heavy elements in a stellar atmosphere. Older stars tend to have fewer of these elements, because they were born at a time before the galaxy had become enriched with them.
Going by its temperature and brightness, HD 65907 is relatively young, with an age of around 5 billion years. And yet it contains very few heavy elements. Plus, its path through the galaxy isn’t in line with other young stars, which tend to serenely orbit the center. HD 65907’s motion is much more erratic, suggesting that it only recently moved here from somewhere else.
In a recent paper, an international team of astronomers dug into archival data to see if they could resolve the mystery. They believe that HD 65907 is a kind of star known as a blue straggler, and that its strange combination of properties stems from a violent event in its past that resurrected it.
If two low-mass stars collide, the remnant can sometimes survive as a single star. At first, that newly merged star will be both massive and large, with its outer surface flung far from the core by the enormous rotation imparted by the collision. But eventually some astrophysical process (strong magnetic fields are one suspect) drags down the star’s rotation rate, letting it settle into equilibrium. In this new state the star appears massive and incredibly hot: a blue straggler.
No matter what, blue straggler stars get a second chance at life. Those mergers transform small stars into big stars, which are just now enjoying their hydrogen-burning main sequence lives.
The astronomers believe this is the case for HD 65907. What makes this star especially unusual is that it’s not a member of a cluster, where frequent mergers can easily lead to blue stragglers. Instead, it’s a field star, wandering the galaxy on its own. It must have cannibalized a companion five billion years ago, leading to its apparently youthful age.
Work like this is essential to untangling the complicated lives of stars in the Milky Way, and it shows how the strangest stars hold the keys to unlocking the evolution of elements that lead to systems like our own.
The JWST Looked Over the Hubble’s Shoulder and Confirmed that the Universe is Expanding Faster
It’s well established that the Universe is expanding. However, measurements of the current expansion rate keep coming out higher than the rate predicted by our best cosmological models.
Astronomers have struggled to understand this and have wondered if the apparent discrepancy is due to instrument errors. The JWST has put that question to rest.
American astronomer Edwin Hubble is widely credited with discovering the expansion of the Universe, but the idea actually stemmed from the equations of relativity and was pioneered by Russian scientist Alexander Friedmann. Hubble’s Law bears Edwin’s name, though, and he was the one who confirmed the expansion and put a more precise value on its rate, now called the Hubble constant. It measures how rapidly galaxies that aren’t gravitationally bound are moving away from one another. The movement of objects due solely to this expansion is called the Hubble flow.
Measuring the Hubble constant means measuring distances to far-flung objects. Astronomers use the cosmic distance ladder (CDL) to do that. However, the ladder has a problem.
This illustration shows the three basic steps astronomers use to calculate how fast the universe expands over time, a value called the Hubble constant. All the steps involve building a strong “cosmic distance ladder” by starting with measuring accurate distances to nearby galaxies and then moving to galaxies farther and farther away. Image Credit: NASA, ESA and A. Feild (STScI)

The first rungs on the CDL are fundamental measurements that can be observed directly. Parallax measurement is the most important fundamental measurement. But the method breaks down at great distances.
Beyond that, astronomers use standard candles, things with known intrinsic brightness, like supernovae and Cepheid variables. Those objects and their relationships help astronomers measure distances to other galaxies. This has been tricky to measure, though advancing technology has made progress.
Another pair of problems plagues the effort, though. The first is that different telescopes and methods produce different distance measurements. The second is that our measurements of distances and expansion don’t match up with the Standard Model of Cosmology, also known as the Lambda Cold Dark Matter (LCDM) model. That discrepancy is called the Hubble tension.
The question is, can the mismatch between the measurements and the LCDM be explained by instrument differences? That possibility has to be eliminated, and the trick is to take one large set of distance measurements from one telescope and compare them to another.
New research in The Astrophysical Journal tackles the problem by comparing Hubble Space Telescope measurements with JWST measurements. It’s titled “JWST Validates HST Distance Measurements: Selection of Supernova Subsample Explains Differences in JWST Estimates of Local H0.” The lead author is Adam Riess, a Bloomberg Distinguished Professor and Thomas J. Barber Professor of Physics and Astronomy at Johns Hopkins University. Riess is also a Nobel laureate, winning the 2011 Nobel Prize in Physics “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae,” according to the Nobel Institute.
As of 2022, the Hubble Space Telescope had gathered the largest sample of homogeneously measured standard candles, covering distances out to about 40 Mpc, or roughly 130 million light-years. “As of 2022, the largest collection of homogeneously measured SNe Ia is complete to D less than or equal to 40 Mpc or redshift z less than or equal to 0.01,” the authors of the research write. “It consists of 42 SNe Ia in 37 host galaxies calibrated with observations of Cepheids with the Hubble Space Telescope (HST), the heritage of more than 1000 orbits (a comparable number of hours) invested over the last ~20 yrs.”
In this research, the astronomers used the powerful JWST to cross-check the Hubble’s work. “We cross-check the Hubble Space Telescope (HST) Cepheid/Type Ia supernova (SN Ia) distance ladder, which yields the most precise local H0 (Hubble flow), against early James Webb Space Telescope (JWST) subsamples (~1/4 of the HST sample) from SH0ES and CCHP, calibrated only with NGC 4258,” the authors write. SH0ES and CCHP are different observing efforts aimed at measuring the Hubble constant. SH0ES stands for Supernova H0 for the Equation of State of Dark Energy, and CCHP stands for Chicago-Carnegie Hubble Program, which uses the JWST to measure the Hubble constant.
“JWST has certain distinct advantages (and some disadvantages) compared to HST for measuring distances to nearby galaxies,” Riess and his co-authors write. It offers a 2.5 times higher near-infrared resolution than the HST. Despite some of its disadvantages, the JWST “is able to provide a strong cross-check of distances in the first two rungs,” the authors explain.
Observations from both telescopes are closely aligned, which all but rules out instrument error as the cause of the discrepancy between observations and the Lambda CDM model.
There’s a lot to digest in this figure from the research. It shows “Comparisons of H0 between HST Cepheids and other measures (JWST Cepheids, JWST JAGB, and JWST NIR-TRGB) for SN Ia host subsamples selected by different teams and for the different methods,” the authors explain. JAGB stands for J-region Asymptotic Giant Branch, and TRGB stands for Tip of the Red Giant Branch. Both JAGB and TRGB are ways of measuring distance to specific types of stars. Basically, coloured circles represent Hubble measurements, and squares represent JWST measurements. “The HST Cepheid and JWST distance measurements themselves are in good agreement,” the authors write. Image Credit: Riess et al. 2024.

“While it will still take multiple years for the JWST sample of SN hosts to be as large as the HST sample, we show that the current JWST measurements have already ruled out systematic biases from the first rungs of the distance ladder at a much smaller level than the Hubble tension,” the authors write.
This research covered about one-third of the Hubble’s data set, with the known distance to a galaxy called NGC 4258 serving as a reference point. Even though the data set was small, Riess and his co-researchers achieved impressively precise results. They showed that the measurement differences were less than 2%. That’s much less than the 8% to 9% in the Hubble tension discrepancy.
NGC 4258 is significant in the cosmic distance ladder because it contains Cepheid variables with metallicities similar to those of Cepheids in both the Milky Way and other galaxies. Astronomers use it to calibrate distances to Cepheids with different metallicities. A new composite of NGC 4258 features X-rays from Chandra (blue), radio waves from the VLA (purple), optical data from Hubble (yellow and blue), and infrared with Spitzer (red). Image Credit: Chandra

That means that our Lambda CDM model is missing something. The standard model yields an expansion rate of about 67 to 68 kilometres per second per megaparsec. Telescope observations yield a slightly higher rate: between 70 and 76 kilometres per second per megaparsec. This work shows that the discrepancy can’t be due to the different telescopes and methods.
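To put numbers on the mismatch (using representative mid-range values from the figures above, so the exact result depends on which measurement you pick):

```python
# Representative values from the ranges quoted above, in km/s/Mpc.
h0_predicted = 67.5   # Lambda CDM prediction (quoted as ~67-68)
h0_measured = 73.0    # a typical distance-ladder measurement (quoted range 70-76)

tension_pct = (h0_measured - h0_predicted) / h0_predicted * 100
print(f"discrepancy ~ {tension_pct:.0f}%")  # ~8%, far larger than the <2% telescope cross-check
```

An 8% gap dwarfs the sub-2% difference between the two telescopes, which is why instrument error no longer works as an explanation.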
“The discrepancy between the observed expansion rate of the universe and the predictions of the standard model suggests that our understanding of the universe may be incomplete. With two NASA flagship telescopes now confirming each other’s findings, we must take this [Hubble tension] problem very seriously—it’s a challenge but also an incredible opportunity to learn more about our universe,” said lead author Riess.
What could be missing from the Lambda CDM model?
Marc Kamionkowski is a Johns Hopkins cosmologist who helped calculate the Hubble constant and recently developed a possible new explanation for the tension. Though not part of this research, he commented on it in a press release.
“One possible explanation for the Hubble tension would be if there was something missing in our understanding of the early universe, such as a new component of matter—early dark energy—that gave the universe an unexpected kick after the big bang,” said Kamionkowski. “And there are other ideas, like funny dark matter properties, exotic particles, changing electron mass, or primordial magnetic fields that may do the trick. Theorists have license to get pretty creative.”
The door is open, theorists just have to walk in.
Astronaut Don Pettit is Serious, He Rigged up Astrophotography Gear on the ISS
Astrophotography is a challenging art. Beyond the usual skill set of understanding things such as light exposure, color balance, and the quirks of your kit, there is the fact that stars are faint and they move.
Technically, the stars don’t move; the Earth rotates. But to capture a faint object, you need a long exposure time, typically from a few seconds to half a minute, depending on the level of detail you want to capture. In thirty seconds, the sky will shift by more than a tenth of a degree. That might not seem like much, but it’s enough to make the stars blur ever so slightly. Many astrophotographers take multiple images and stack them for even greater detail, which would blur things even more. It can create an interesting effect, but it doesn’t give you a panorama of pinpoint stars.
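That tenth-of-a-degree figure falls straight out of Earth’s rotation rate; here’s the arithmetic:

```python
SIDEREAL_DAY = 86164       # seconds for Earth to rotate 360 degrees relative to the stars
rate = 360 / SIDEREAL_DAY  # sky drift in degrees per second (~0.0042)

exposure = 30              # seconds
drift = rate * exposure
print(f"sky shift in {exposure} s: {drift:.3f} degrees")  # ~0.125 degrees
```

A typical camera pixel spans only a few arcseconds of sky, so 0.125 degrees (450 arcseconds) smears a star across many pixels.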
The motion blur of starlight used to create a rain of stars. Credit: Diana Juncher/ESO

Fortunately, there is plenty of off-the-shelf equipment you can get to account for motion blur. There are tracking motors you can mount to your camera that move your frame in time with the Earth’s rotation. They are incredibly precise so that you can capture image after image for hours, and your camera will always be perfectly aligned with the sky. If you make your images into a movie, the stars will remain fixed while the Earth rotates beneath them.
Of course, most astrophotographers face the same limitations as almost everyone else. We are bound to the Earth and can only view the stars through our blanket of sky. If we could rise above the atmosphere, we would have an unburdened view of the heavens: a sky filled with uncountable, untwinkling stars. While astronauts often talk about this wondrous sight, photographs of stars from orbit are often less than spectacular. That’s because of how difficult astrophotography is in space, and it all comes back to motion blur.
Most space-based astrophotography is done from the International Space Station (ISS). Since the ISS is in a relatively low orbit, it travels around the Earth once every 90 minutes. This means the stars appear to drift at a rate 16 times faster than they do on Earth. A 30-second exposure on the ISS has more motion blur than an eight-minute exposure on Earth. Because of this, most photographs from the ISS either have blurry stars or only capture the brightest stars.
Don Pettit’s Homemade Orbital Sidereal Tracker. Credit: Don Pettit

Ideally, an astronaut astrophotographer would bring along a camera mount similar to the ones used on Earth. But the market demand for such a mount is tiny, so you can’t just buy one from your local camera store. You have to make your own, which is precisely what astronaut Don Pettit did. Working with colleagues from RIT, he created a camera tracker that shifts by 0.064 degrees per second and can be adjusted by plus or minus 5%. With this mount, Don has been able to capture 30-second exposures with almost no motion blur. His images rival some of the best Earth-based images, but he takes them from space!
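The quoted tracker rate lines up with what a 90-minute orbit demands; a quick sanity check (treating the orbit as exactly 90 minutes, which is an approximation):

```python
orbital_period = 90 * 60             # ISS orbital period in seconds (approximate)
orbital_rate = 360 / orbital_period  # apparent sky drift from the ISS, degrees per second
sidereal_rate = 360 / 86164          # drift seen from the ground, degrees per second

print(f"ISS drift: {orbital_rate:.4f} deg/s")               # ~0.0667 deg/s
print(f"~{orbital_rate / sidereal_rate:.0f}x ground rate")  # ~16x

# Pettit's tracker runs at 0.064 deg/s, adjustable by plus or minus 5%:
low, high = 0.064 * 0.95, 0.064 * 1.05
print(low <= orbital_rate <= high)   # the nominal orbital rate sits inside the adjustment range
```

The 5% adjustment range matters because the ISS’s actual orbital period varies slightly with altitude, and the apparent drift also depends on where the camera points.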
The detail of his photographs is unprecedented. In the image above, for example, you can see the Large and Small Magellanic Clouds, and not just as fuzzy patches in the sky. You can see individual stars within the clouds. The image also gives an excellent view of an effect known as airglow. Molecules in the upper atmosphere are ionized by sunlight and cosmic rays, which means this layer always has a faint glow to it. No matter how skilled a terrestrial astrophotographer is, their images will always have a bit of this glow.
Airglow from different molecules in the upper atmosphere. Credit: NASA/annotations by Alex Rivest

But not Don Pettit. He’s currently on the ISS, capturing outstanding photographs as a side hobby from his day job. If you want to see more of his work, check him out on Reddit, where he posts under the username astro_pettit.
Drones Are Being Tested for Flights on Alien Worlds
We’ve already seen the success of the Ingenuity helicopter on Mars. The first aircraft to fly on another world set off on its maiden voyage in April 2021 and went on to complete 72 flights. Now a team of engineers is taking the idea one step further, investigating ways that drones can be released from satellites in orbit and explore an atmosphere without having to land. The results are positive and suggest this could be a cost-effective way to explore alien atmospheres.
The idea of using drones on alien worlds has enticed engineers and planetary scientists for a few years now. Drones are lightweight, versatile, and a cost-effective way to study planetary atmospheres. Orbiters and rovers have been visiting the planets for decades, but drones can explore in ways rovers and orbiters cannot. Not only are they useful for studying atmospheric effects, they can also reach inaccessible surface areas, providing imagery to inform potential future land-based study.
Illustration of Perseverance on Mars

Perhaps the most famous, indeed the only successful, planetary drone to date is Ingenuity, which was part of the Perseverance rover mission. It demonstrated that controlled flight in the Martian atmosphere was possible, scouted possible landing sites for future missions, and helped direct ground-based exploration. Its iconic large rotors were needed because the rarefied atmosphere on Mars requires longer blades to generate the required lift. Ingenuity was originally planned as a technology demonstration, but it soon became a useful tool in the Perseverance mission’s arsenal.
Ingenuity helicopter

NASA engineers are well aware of the benefits of drone technology, so a team of engineers and researchers from the Armstrong Flight Research Center in California have been taking the idea of small drones one step further. The research was funded by a 2023 Center Innovation Fund award and began with the team developing three atmospheric probe models. The models were identical, each measuring 71 cm from top to bottom: one for visual demonstration and the other two for research and technology-readiness testing.
Their first launch on 1 August didn’t go to plan, with a failure in the release mechanism. The team reviewed everything from the lifting aircraft to the release mechanism and even the probe design itself to identify improvements. They were finally able to conduct flights with their new atmospheric probe after it was released from a quad-rotor remotely piloted aircraft on 22 October 2024.
The flights were conducted above Rogers Dry Lake in California, with designs informed by previous NASA instrumentation designed for lifting and transportation. The test flights aimed to prove that the shape of the probe worked. The team now wants to release the probe from a higher altitude, ultimately hoping to release it from a satellite in orbit around a planet.
The next steps are to review photos and videos from the flight to identify further improvements before another probe is built. Once they have proved the flight technology, instrumentation will be added to facilitate data gathering and recording. If all goes to plan, the team hopes to be chosen for a mission to one of the planets, be released in orbit, and then dive into the atmosphere under controlled flight to learn more about the environment.
Source : Atmospheric Probe Shows Promise in Test Flight
One of the Most Interesting Exoplanets Just Got Even More Interesting!
Since the discovery of the first exoplanets in 1992, thousands more have been found. One such system orbits a star known as Trappist-1, 40 light-years away. Studies using the James Webb Space Telescope have revealed that one of its planets, Trappist-1 b, has a crust that seems to be changing. Geological activity or weathering is the likely cause, and if it’s the latter, the exoplanet has an atmosphere too.
Exoplanets are planets that orbit other stars. They vary in size, composition, and distance from their star. Finding them is a tricky undertaking, and a number of different approaches are used. Since the first discovery, over 5,000 exoplanets have been found, and now, of course, the hunt is on for planets that could sustain life. Likely candidates would orbit their host star in a region known as the habitable zone, where the temperature is just right for a life-sustaining world to evolve.
This illustration shows what the hot rocky exoplanet TRAPPIST-1 b could look like. A new method can help determine what rocky exoplanets might have large reservoirs of subsurface water. Credits: NASA, ESA, CSA, J. Olmsted (STScI)

Three exoplanets in the Trappist-1 system orbit the star within the habitable zone: Trappist-1e, f, and g. The star is a cool dwarf in the constellation of Aquarius and was identified as a host of exoplanets in 2017. The discoveries were made using data from NASA’s Kepler Space Telescope and the Spitzer Space Telescope. The system was named after the Transiting Planets and PlanetesImals Small Telescope (TRAPPIST).
The Spitzer Space Telescope observatory trails behind Earth as it orbits the Sun. Credit: NASA/JPL-Caltech

A team of researchers from the Max Planck Institute for Astronomy and the Commissariat aux Énergies Atomiques (CEA) in Paris have been studying Trappist-1b. They used the Mid-Infrared Imager of the James Webb Space Telescope to measure thermal radiation from the exoplanet, and their findings have been published in the journal Nature Astronomy. Previous studies concluded that Trappist-1b was a dark, rocky planet with no atmosphere. The new study has turned this conclusion on its head.
The team’s measurements revealed something else: a world with a surface composed of largely unchanged material. Typically, the surface of a world with no atmosphere is weathered by radiation and peppered with impacts from meteorites. The study found that the surface material is around 1,000 years old, far younger than the planet itself, which is thought to be several billion years old.
The team postulates that this could indicate volcanic activity or plate tectonics, since the planet is large enough to still retain internal heat from its formation. It’s also possible that the observations reveal a thick atmosphere rich in carbon dioxide. At first, the observations suggested there was no carbon dioxide layer, since the team found no evidence of thermal radiation absorption. However, their models showed that atmospheric haze can reverse the temperature profile of a carbon dioxide-rich atmosphere. Typically the ground is the warmest region, but in the case of Trappist-1b, the atmosphere may absorb radiation, heating the upper layers, which then radiate infrared energy themselves. A similar process is seen on Saturn’s moon Titan.
Fortunately, the alignment of the planetary system means that the planet passes directly in front of its star, so spectroscopic observations of the dimming starlight during these transits can reveal the profile of the atmosphere. Further studies are now underway to take more observations and determine the nature of the atmosphere around Trappist-1b.
Source : Does the exoplanet Trappist-1 b have an atmosphere after all?
Zwicky Classifies More Than 10,000 Exploding Stars
Even if you knew nothing about astronomy, you’d understand that exploding stars are forceful and consequential events. How could they not be? Supernovae play a pivotal role in the Universe with their energetic, destructive demises.
There are different types of supernovae exploding throughout the Universe, with different progenitors and different remnants. The Zwicky Transient Facility has detected 100,000 supernovae and classified more than 10,000 of them.
The Zwicky Transient Facility (ZTF) is a wide-field astronomical survey named after the prolific Swiss astronomer Fritz Zwicky. In the early 1930s, Zwicky and his colleague Walter Baade coined the term ‘supernova’ to describe the transition of normal main sequence stars into neutron stars. In the 1940s, Zwicky and his colleague developed the modern supernova classification system. The ZTF bears his name because of these and many other scientific contributions. (Zwicky was also a humanitarian and a philosopher.)
The ZTF observes in both optical and infrared and was built to detect transients with the Samuel Oschin Telescope at the Palomar Observatory in San Diego County, California. Transients are objects that change brightness rapidly or objects that move. While supernovae (SN) don’t move, they definitely change brightness rapidly. They can outshine their entire host galaxy for months.
In 2017, the ZTF began its Bright Transient Survey (BTS), an effort dedicated to the search for supernovae (SNe). It’s by far the largest spectroscopic SNe survey ever conducted. The BTS has discovered 100,000 potential SNe, and more than 10,000 of them have been confirmed and classified according to distance, type, rarity, and brightness. These types of astronomical surveys create a rich dataset that will aid researchers well into the future.
“There are trillions of stars in the universe, and about every second, one of them explodes. Reaching 10,000 classifications is amazing, but what we truly should celebrate is the incredible progress we have made in our ability to browse the universe for transients, or objects that change in the sky, and the science our rich data will enable,” said Christoffer Fremling, a staff astronomer at Caltech. Fremling leads the ZTF’s Bright Transient Survey (BTS).
The effort to catalogue supernovae dates back to 2012 when astronomical databases began officially tracking them. Since then, astronomers have detected nearly 16,000 of them, and the ZTF is responsible for more than 10,000 of those detections.
The first documented supernova discovery was in 185 AD, when Chinese astronomers recorded the appearance of a ‘guest star’ that shone for eight months. In the nearly two millennia since then, we’ve seen many more. 1987 was a watershed year for supernova science, when a massive star exploded in the nearby Large Magellanic Cloud. Named SN 1987A, it was the first supernova visible to the naked eye since the telescope was invented. It also provided the first direct detection of neutrinos from a supernova, a detection considered by many to mark the beginning of neutrino astronomy.
A timeline of important events in the history of supernova astronomy. Click to enlarge. Image Credit: ZTF/Caltech/NSF

Each night, the ZTF detects hundreds of thousands of events, including everything from small, simple asteroids in our inner Solar System to powerful gamma-ray bursts in the distant Universe. The ZTF uses a pair of telescopes that act as a kind of ‘triage’ facility for supernovae and transients. The Samuel Oschin Telescope has a 60-megapixel wide field camera that images the visible sky every two nights. Astronomers detect new transient events by subtracting images of the same portion of the sky from subsequent scans.
Then, members of the ZTF team study these images and send the most promising to the other ZTF telescope, the Spectral Energy Distribution Machine (SEDM). This robotic spectrograph operates on the Palomar 60-inch telescope.
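The subtraction step at the heart of this triage can be sketched in a few lines of Python. This is a toy illustration of difference imaging, not the actual ZTF pipeline (which also aligns the images and matches their point-spread functions first); the arrays, threshold, and function name here are invented for the example.

```python
import numpy as np

def find_transients(reference, new_image, threshold=5.0):
    """Flag pixels that brightened significantly between two scans.

    Assumes the two images are already astrometrically aligned and
    PSF-matched, as a real survey pipeline would ensure.
    """
    diff = new_image - reference
    noise = np.std(diff)                         # crude noise estimate
    candidates = np.argwhere(diff > threshold * noise)
    return diff, candidates

# Toy example: a flat, noisy sky in which one new source appears.
rng = np.random.default_rng(42)
ref = rng.normal(100.0, 1.0, size=(64, 64))      # reference scan
new = ref + rng.normal(0.0, 1.0, size=(64, 64))  # later scan of the same field
new[30, 40] += 50.0                              # a "supernova" brightens

_, candidates = find_transients(ref, new)
print(candidates)                                # → [[30 40]]
```

In the real survey, a candidate flagged this way would then be passed to the SEDM for spectroscopic classification.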
“We combine the brightness information from the ZTF camera with the data from the SEDM to correctly identify the origin and type of a transient, a process astronomers call transient classification,” said Yu-Jing Qin, a postdoc at Caltech, who is running much of the daily operations of the BTS survey.
ZTF detections are also sent to other observatories around the world, which can examine transients with their own spectroscopic facilities. About 30% of the ZTF transients have been confirmed this way.
ZTF detects so many transients that it’s difficult for astronomers to keep up. In recent years, Caltech has made an effort to develop machine-learning tools that can examine SEDM spectroscopic data, classify the transients, and send them to the Transient Name Server. In 2023, the BTSbot system was employed to help manage the flow of detections.
“Since BTSbot began operation it has found about half of the brightest ZTF supernovae before a human,” said PhD student Nabeel Rehemtulla from Northwestern University, developer of the BTSbot. “For specific types of supernovae, we have automated the entire process and BTSbot has so far performed excellently in over a hundred cases. This is the future of supernova surveys, especially when the Vera Rubin Observatory begins operations.”
Though every supernova discovery is scientifically valuable, there are some highlights among all these detections.
The ZTF has detected thousands of Type Ia supernovae. They occur in binary systems where one star is a white dwarf. The white dwarf draws gas away from its companion, and the gas accumulates on the white dwarf. Eventually, this triggers a supernova explosion. SN 2022qmx is one of these Type Ia supernovae that appeared far brighter than it should be. It turns out that an intervening galaxy was gravitationally lensing the SN’s light, making it appear 24 times brighter.
The ZTF is also responsible for detecting the closest and most distant SNe (with help from the JWST).
Some highlights from the ZTF’s 10,000 supernovae. Click the image to enlarge. Image Credit: ZTF/Caltech/NSF

“Back when we started this project, we didn’t know how many astronomers would follow up on our detections,” said Caltech’s Fremling. “To see that so many have is a testament to why we built ZTF: to survey the whole sky for changing objects and share those data as rapidly as possible with astronomers around the world. That’s the purpose of the Transient Name Server (TNS).”
The TNS is where the global astronomical community announces the detection and classification of transients so that work isn’t duplicated. Since 2016, the TNS has handled over 150,000 reported transients and over 15,000 reported supernovae.
“Everything is public in hopes that the community will come together and make the most of it,” said Fremling. “This way, we don’t have, say, 10 telescopes across the world doing the same thing and wasting time.”
Soon, the ZTF will have a powerful partner in time-domain astronomy. The Vera Rubin Observatory (VRO) should see its first light in the next few months and then begin its 10-year Legacy Survey of Space and Time (LSST). The LSST will also detect transients but is far more sensitive than the ZTF. It’s expected to detect millions of supernovae, and handling all of those detections will require a machine-learning tool similar to the BTSbot.
“The machine learning and AI tools we have developed for ZTF will become essential when the Vera Rubin Observatory begins operations,” said Daniel Perley, an astronomer at Liverpool John Moores University in the UK who developed the search and discovery procedures for the BTS. “We have already planned to work closely with Rubin to transfer our machine learning knowledge and technology,” added Perley.
Astronomical surveys like the ones performed by ZTF and the VRO provide foundational data that researchers will use for years. It’s impossible to know how it will be used in every case or what discoveries it will lead to. Even better, the ZTF and the VRO will overlap.
According to Caltech astronomy professor Mansi Kasliwal, who will lead ZTF in the coming two years, this will be a very important and exciting time in time-domain astronomy.
“The period in 2025 and 2026 when ZTF and Vera Rubin can both operate in tandem is fantastic news for time-domain astronomers,” said Kasliwal. “Combining data from both observatories, astronomers can directly address the physics of why supernovae explode and discover fast and young transients that are inaccessible to ZTF or Rubin alone. I am excited about the future,” added Kasliwal.
The post Zwicky Classifies More Than 10,000 Exploding Stars appeared first on Universe Today.
What is the Zoo Hypothesis?
It seems that we are completely alone in the universe. But simple reasoning suggests that there should be an abundance of alien civilizations. Maybe they’re all out there, but they are keeping their distance. Welcome to the zoo (hypothesis).
The story goes that in the summer of 1950, eminent physicist Enrico Fermi was visiting colleagues at Los Alamos National Laboratory. It was the initial peak of UFO mania, and naturally the physicists brought it up over lunch. After a short while, Fermi went silent. Later, well after the conversation had turned to other topics, he exclaimed “Where is everybody?”
Everybody knew what he meant. We know that the universe is capable of producing intelligent life. We’re literally living proof of that. But the cosmos tends to not do things just once. If life happened here, it likely also happened elsewhere. In fact, given the extraordinary age of the universe and the incredible number of stars and planetary systems in any given galaxy, the Milky Way should be abuzz with intelligent space-faring civilizations.
Humanity itself is right on the cusp of developing a sustained interplanetary presence, and our species is still in its youth, at least as cosmic reckoning is concerned. We should see evidence for other intelligent species everywhere: radio signals, megastructures, wandering probes, and so on.
But we’ve got nothing. So where is everybody?
Perhaps the strangest possible solution to Fermi’s paradox, as this conundrum came to be known, is the zoo hypothesis. In this idea, alien life is indeed common, as is intelligence. There really is no huge barrier to intelligent creatures developing spaceflight capabilities and spreading themselves throughout the galaxy.
But the reason that we don’t see anybody is that they are intentionally hiding themselves from us. Through their sophisticated observations, they can easily tell that we are intelligent ourselves, but also somewhat dangerous. After all, we have peaceful space rockets and dangerous ICBMs. We are just dipping our toes into space, and we may not be exactly trustworthy.
And so the intelligent civilizations of the galaxy are keeping us in a sort of “zoo.” They are masking themselves and their signals, making us think that we’re all alone, largely confined to our own solar system and a few nearby stars.
Once we prove ourselves, the hypothesis goes, we’ll be welcomed into the larger galactic community with open arms (or tentacles).
The zoo hypothesis is, honestly, a little far-fetched. It assumes not only the existence of alien civilizations, but also their motives and intentions. But we ultimately do not know if we are alone in the universe. And there’s only one way to find out.
A New Study Suggests How we Could Find Advanced Civilizations that Ran Out of Fusion Fuel
When it comes to our modern society and the many crises we face, there is little doubt that fusion power is the way of the future. The technology not only offers abundant power that could solve the energy crisis, it does so in a clean and sustainable way. At least as long as our supplies of deuterium (²H) and helium-3 hold up. In a recent study, a team of researchers considered how evidence of deuterium-deuterium (DD) fusion could be used as a potential technosignature in the Search for Extraterrestrial Intelligence (SETI).
The study was conducted by David C. Catling and Joshua Krissansen-Totton of the Department of Earth & Space Sciences and the Virtual Planetary Laboratory (VPL) at the University of Washington (respectively) and Tyler D. Robinson of the VPL and the Lunar & Planetary Laboratory (LPL) at the University of Arizona. In their paper, which is set to appear in the Astrophysical Journal, the team considered how long-lived extraterrestrial civilizations may deplete their supplies of deuterium – something that would be detectable by space telescopes.
At the heart of SETI lies the assumption that advanced civilizations have existed in our galaxy long before humanity. A corollary extends from this: if humanity can conceive of something (and the physics are sound), a more advanced civilization is likely to have already built it. In fact, many SETI researchers and scientists have suggested that advanced civilizations will adopt fusion power to meet their growing energy needs as they continue to grow and ascend the Kardashev Scale.
The spherical tokamak MAST at the Culham Centre for Fusion Energy (UK). Photo: CCFE

This is understandable, considering how other forms of energy (fossil fuels, solar, wind, nuclear, hydroelectric, etc.) are either finite or inefficient. Space-based solar power is a viable option since it can provide a steady supply of energy that is not subject to intermittency or weather patterns. Nevertheless, nuclear fusion is considered a major contender for future energy needs because of its efficiency and energy density. It is estimated that one gram of hydrogen fuel could generate as much as 90,000 kilowatt-hours of energy – the equivalent of 11 metric tons (12 U.S. tons) of coal.
In addition, deuterium has a natural abundance in Earth’s oceans of about one atom of deuterium for every 6,420 atoms of hydrogen. This deuterium interacts with water molecules and will replace one or both hydrogen atoms to create “semi-heavy water” (HDO) and sometimes “heavy water” (D2O). This works out to about 4.85×10¹³ metric tons (5.346×10¹³ U.S. tons) of deuterium. As they argue in their paper, extracting deuterium from an ocean would decrease its ratio of deuterium-to-hydrogen (D/H), which would be detectable in atmospheric water vapor. Meanwhile, the helium produced in the nuclear reactions would escape to space.
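That ocean inventory can be reproduced with a quick back-of-the-envelope calculation. The ocean mass used here (about 1.4×10²¹ kg) is a standard reference value and an assumption of this sketch, not a number taken from the paper.

```python
# Rough estimate of the deuterium inventory of Earth's oceans.
OCEAN_MASS_KG = 1.4e21            # total mass of Earth's oceans (textbook value)
H_MASS_FRACTION = 2.016 / 18.015  # hydrogen's share of water's mass
D_TO_H = 1 / 6420                 # ocean deuterium-to-hydrogen atom ratio

hydrogen_kg = OCEAN_MASS_KG * H_MASS_FRACTION
# Convert to relative atom counts: H is ~1.008 u, D is ~2.014 u.
deuterium_kg = (hydrogen_kg / 1.008) * D_TO_H * 2.014
print(f"{deuterium_kg / 1e3:.2e} metric tons of deuterium")
# comes out near 4.9e13 metric tons, close to the inventory quoted above
```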
In recent years, it has been suggested that excess carbon dioxide and radioactive isotopes in an exoplanet’s atmosphere could be used to infer the presence of an industrial civilization. In the same vein, low values of D/H in an exoplanet’s atmosphere (along with helium) could be used to detect a highly advanced and long-lived civilization. As Catling explained in a recent interview with phys.org, this possibility is one he began pondering years ago.
“I didn’t do much with this germ of idea until I was co-organizing an astrobiology meeting last year at Green Bank Observatory in West Virginia,” he said. “Measuring the D/H ratio in water vapor on exoplanets is certainly not a piece of cake. But it’s not a pipe dream either.”
A model JWST transmission spectrum for an Earth-like planet, showing the wavelengths of sunlight that molecules like ozone (O3), water (H2O), carbon dioxide (CO2), and methane (CH4) absorb. Credit: NASA, ESA, Leah Hustak (STScI)

To model what an advanced civilization dependent on DD fusion would look like, Catling and his colleagues considered projections for what Earth will look like by 2100. At this point, the global population is expected to reach 10.4 billion, and fusion power is projected to provide 100 terawatts (TW). They then multiplied that by a factor of ten (1,000 TW) for a more advanced civilization and found that such a civilization would reduce the D/H value of an Earth-like ocean to that of the interstellar medium (ISM) in about 170 million years.
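That timescale can be sanity-checked with a rough energy budget. Everything below is an illustrative assumption rather than the paper's detailed model: a full deuterium burn chain (6 D → 2 He-4 + 2 p + 2 n + 43.2 MeV), the ocean inventory from earlier, and burning roughly 85% of it to approach the ISM ratio. The sketch recovers the right order of magnitude (a few hundred million years), not the paper's exact 170-million-year figure.

```python
# Order-of-magnitude check: how long could 1,000 TW of DD fusion run
# on an Earth-like ocean's deuterium?
SECONDS_PER_YEAR = 3.156e7
POWER_W = 1e15                     # 1,000 TW, as assumed above

# Full D-D burn chain: 6 D -> 2 He-4 + 2 p + 2 n + 43.2 MeV released.
MEV_TO_J = 1.602e-13
AMU_TO_KG = 1.661e-27
ENERGY_PER_KG_D = 43.2 * MEV_TO_J / (6 * 2.014 * AMU_TO_KG)  # ~3.4e14 J/kg

OCEAN_D_KG = 4.85e16               # ocean deuterium inventory (assumed)
BURN_FRACTION = 0.85               # burn-down toward the ISM D/H ratio (assumed)

lifetime_s = OCEAN_D_KG * BURN_FRACTION * ENERGY_PER_KG_D / POWER_W
lifetime_yr = lifetime_s / SECONDS_PER_YEAR
print(f"~{lifetime_yr:.1e} years")  # a few hundred million years
```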
The beauty of this approach is that the low D/H values in an exoplanet’s atmosphere would persist long after a civilization went extinct, migrated off-world, or became even more advanced and “transcended.” In terms of search strategies, the team used the Spectral Mapping Atmospheric Radiative Transfer (SMART) model to identify the specific wavelengths and emission lines for HDO and H2O. These findings will be useful for future surveys involving the James Webb Space Telescope (JWST), NASA’s proposed Habitable Worlds Observatory (HWO), and the Large Interferometer For Exoplanets (LIFE).
“It’s up to the engineers and scientists designing [HWO] and [LIFE] to see if measuring D/H on exoplanets might be an achievable goal. What we can say, so far, is that looking for D/H from LIFE appears to be feasible for exoplanets with plenty of atmospheric water vapor in a region of the spectrum around 8 microns wavelength.”
Further Reading: phys.org, arXiv
We Might Finally Know How Galaxies Grow So Large
Astronomers have spent decades trying to understand how galaxies grow so large. One piece of the puzzle is spheroids, also known as galactic bulges. Spiral galaxies and elliptical galaxies have different morphologies, but they both have spheroids. This is where most of their stars are and, in fact, where most stars in the Universe reside. Understanding spheroids is therefore critical to understanding how galaxies grow and evolve.
New research focused on spheroids has brought astronomers closer than ever to understanding how galaxies become so massive.
Elliptical galaxies have no flat disk component. They’re smooth and featureless and contain comparatively little gas and dust compared to spirals. Without gas and dust, new stars seldom form, so ellipticals are populated with older stars.
Astronomers don’t know how these ancient, bulging galaxies formed and evolved. However, a new research letter in Nature may finally have the answer. It’s titled “In situ spheroid formation in distant submillimetre-bright galaxies.” The lead author is Qing-Hua Tan from the Purple Mountain Observatory, Chinese Academy of Sciences, China. Dr. Annagrazia Puglisi from the University of Southampton co-authored the research.
“Our findings take us closer to solving a long-standing mystery in astronomy that will redefine our understanding of how galaxies were created in the early universe.”
Dr. Annagrazia Puglisi, University of Southampton

The international team of researchers used the Atacama Large Millimetre/sub-millimetre Array (ALMA) to examine highly luminous starburst galaxies in the distant Universe. Sub-millimetre means ALMA observes electromagnetic radiation at wavelengths between the far-infrared and microwave bands. Astronomers have long suspected that these galaxies are connected to spheroids, but observing them is challenging.
“Infrared/submillimetre-bright galaxies at high redshifts have long been suspected to be related to spheroid formation,” the authors write. “Proving this connection has been hampered so far by heavy dust obscuration when focusing on their stellar emission or by methodologies and limited signal-to-noise ratios when looking at submillimetre wavelengths.”
This image shows two of the Atacama Large Millimeter/submillimeter Array (ALMA) 12-metre antennas. ALMA has 66 antennas that work together as an interferometer. (Credit: Iztok Bonina/ESO)

The researchers used ALMA to analyze more than 100 of these ancient galaxies with a new technique that measures their distribution of light. These brightness profiles show that the majority of the galaxies have tri-axial shapes rather than flat disks, indicating that something in their history made them misshapen.
Two important concepts underpin the team’s results: the Sérsic index and the Spergel index.

The Sérsic index is a fundamental tool for describing the brightness profiles of galaxies. It characterizes the radial distribution of light coming from a galaxy, essentially describing how strongly the light is concentrated toward the centre.

The Spergel index is less commonly used. It’s based on the distribution of dark matter in galaxies; rather than light, it helps astronomers understand how matter is concentrated. Together, the two indices help astronomers characterize the complex structure of galaxies.
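The Sérsic profile itself is compact enough to sketch. The snippet below uses the common approximation b_n ≈ 2n - 1/3 (reasonable for n ≳ 0.5); the comparison values are illustrative, not taken from the study.

```python
import math

def sersic(r, r_e, n, i_e=1.0):
    """Sersic surface-brightness profile:
    I(r) = I_e * exp(-b_n * ((r / r_e)**(1/n) - 1)),
    with the approximation b_n ~ 2n - 1/3 (good for n >~ 0.5).
    Larger n concentrates more of the light toward the centre.
    """
    b_n = 2.0 * n - 1.0 / 3.0
    return i_e * math.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# At one-tenth of the effective radius, a de Vaucouleurs-like spheroid
# (n = 4) is far brighter relative to its outskirts than an exponential
# disk (n = 1); this is the concentration the Sersic index measures.
disk_core = sersic(0.1, 1.0, n=1)       # ~4.5 times the brightness at r_e
spheroid_core = sersic(0.1, 1.0, n=4)   # ~29 times the brightness at r_e
print(disk_core, spheroid_core)
```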
These indices, along with the new ALMA observations, led to new insights into how spheroids formed through mergers and the resulting influx of cold, star-forming gas.
It all starts with a galaxy collision or merger, which sends large flows of cold gas into the galactic centre.
This is a JWST image (not from this research) of an ancient galaxy merger from 13 billion years ago. The galaxy, named Gz9p3, has a double nucleus indicating that the merger is ongoing. While astronomers know that mergers are a critical part of galaxy growth and evolution, the role spheroids play has been difficult to discern. Image Credit: NASA/Boyett et al

“Two disk galaxies smashing together caused gas—the fuel from which stars are formed—to sink towards their centre, generating trillions of new stars,” said co-author Puglisi. “These cosmic collisions happened some eight to 12 billion years ago when the universe was in a much more active phase of its evolution.”
“This is the first real evidence that spheroids form directly through intense episodes of star formation located in the cores of distant galaxies,” Puglisi said. “These galaxies form quickly—gas is sucked inwards to feed black holes and triggers bursts of stars, which are created at rates ten to 100 times faster than our Milky Way.”
The researchers compared their observations to hydrodynamical simulations of galaxy mergers. The results show that the spheroids can maintain their shape for up to approximately 50 million years after the merger. “This is compatible with the inferred timescales for the submillimeter-bright bursts based on observations,” the authors write. After this intense period of star formation in the spheroid, the gas is used up and things die down. No more energy is injected into the system, and the residual gas flattens out into a disk.
This figure from the research shows how the spheroids lose their shape after the intense period of star formation following a merger. (a) shows maps (2×2 kpc) of the central gas in three different mergers, showing the flattest projection for these systems observed at 12 Myr from coalescence; that is, these systems are 3D spheroidal structures, not face-on disks. (b) shows the star-formation rate peaking and then diminishing over time. (c) shows C/A, which quantifies the relative system thickness encompassing all galactic components, including disks, bars, and bulges. It’s a ratio between C, the shortest axis, and A, the longest axis in a triaxial ellipsoid. Image Credit: Tan et al. 2024.
These types of galaxies were more plentiful in the early Universe than they are now. The researchers’ results show that these galaxies used up their fuel quickly, forming the spheroids that are now populated by old stars.
This isn’t the first time that astronomers have investigated the potential link between spheroids and distant submillimeter-bright galaxies. Previous research that found evidence for tri-axiality also found heavy ellipticity and other evidence showing that submillimeter-bright galaxies are disks with bars in the submillimeter. However, this new research relied on observations with a higher signal-to-noise ratio than previous research.
“Astrophysicists have sought to understand this process for decades,” Puglisi said. “Our findings take us closer to solving a long-standing mystery in astronomy that will redefine our understanding of how galaxies were created in the early universe.”
“This will give us a more complete picture of early galaxy formation and deepen our understanding of how the universe has evolved since the beginning of time.”
Building Concrete on Mars From Local Materials
Imagine you’ve just gotten to Mars as part of the first contingent of settlers. Your first challenge: build a long-term habitat using local materials. Those might include water from the polar caps mixed with specific surface soils. They might even require some very personal contributions—your blood, sweat, and tears. Using such in situ materials is the challenge a team of Iranian engineers studied in a research project looking at local materials on Mars.
In situ resource utilization has always been part of Mars mission and colonization scenarios. It’s expensive to bring along habitat construction materials with you, and space will be limited onboard the ship. Once you settle on Mars, you can use your ship as a habitat until you build your new colony. But, what are you going to create new homes from?
Cement or concrete comes to mind, made from whatever’s available on or just below the surface. The authors of the study, Omid Karimzade Soureshjani, Ali Massumi, and Gholamreza Nouri, focused on Martian cement. They assembled data sets about soil composition from Mars landers and orbiters and came up with a collection of concrete types that future colonists could use. Next, they applied structural engineering principles and suggested some options for onsite construction using what are called spider/radar diagrams and charts. These allow building planners to apply data for different concepts of Mars architecture.
A graph showing steps in the study of possible building materials on Mars. Courtesy: Soureshjani, et al. Click to enlarge.

Building That Mars City

The authors, like most of us, foresee permanent settlements in the next decades. They write, “The goal would be to establish a self-sustaining city (self-sufficient megabase) on the surface of Mars, accommodating at least a million people. However, constructing safe, stable, and sufficient buildings that can withstand the harsh Martian environment for such a population will be challenging. Due to the high costs associated with importing buildings, materials, and structural elements from Earth, it is necessary to construct all buildings on-site using local resources.”
Let’s look at the usability and cost-effectiveness of Martian soil (regolith). Chemically, it’s rich in the right amounts of elements to make different types of concrete. Of course, not all the regoliths are equally useful, so they propose surface scans to find the best surface materials mixes. Presumably, those scans will help future inhabitants find the best collections. Access to those raw materials from around the planet should make them cost-effective, eventually.
Challenges to Mars Construction

Of course, there are other factors besides material availability at work in such a construction project. Here on Earth, we have centuries of experience building in this gravity well, with familiar materials. We know how to build things under this atmospheric pressure, and we don’t have to contend with the harsh conditions of a planet constantly bombarded by ultraviolet radiation. Mars presents the challenge of creating buildings that have to withstand that radiation, the lower atmospheric pressure, and water scarcity. That lower pressure and gravity on Mars could seriously affect the durability of a given concrete made from Martian materials.
In addition to planetary geology and surface conditions, it takes energy to collect, process, and create the building materials needed for long-term habitation. You need a simple, cost-effective energy source—particularly in the beginning. It’s not likely that nuclear power plants will be first on the list to build. Those require a tremendous number of resources. Perhaps later they can be built, but not in the first wave. Solar energy is going to be the “go-to” resource in the beginning. In addition, to make cement, you need water. And, water is a notably scarce resource on much of Mars, except at the poles. They could provide some water from the ice caps, but you’ll likely want to figure out a way to make good cement with the least amount of water.
Using Organic Binders for Mars Home Building Blocks

Interestingly, the authors mention something called “blood concrete”, or its modern version: AstroCrete. It’s a concept based on ancient Roman practices of using organic additives to construction materials (think: animal blood, urine, etc.). Now, they aren’t suggesting that future Martians must “bleed for their art” but our bodies do make plasma rather easily. It could be a useful resource.
A substance called “human serum albumin” (HSA) is under study as a binder to mix with “AstroCrete” materials, along with sweat, tears, and urine. All of these will be available in relative abundance in future Mars settlements. The AstroCrete made from Martian soils and human “contributions” is a strong, reliable building material (and you hope it won’t smell too bad). Essentially, AstroCrete is a waterless cement.
Visible light images of the 3D-printed HSA-ERB based on Martian Global Simulant. (a) after fabrication, (b) during compression testing, and (c) after compression testing. Courtesy: Roberts, et al.

Exploring the Possibilities

The authors studied 11 types of cement, including geopolymer and magnesium silica mixtures, all of which require specific materials. They point out that sulfur concrete is probably going to be the most promising avenue for structures on Mars. Others will take more study and implementation to understand their usability in Martian conditions. In the long term, searching out and understanding the materials available on the Red Planet will help future colonists build the necessary habitats and cities. Finally, the authors point out that additional study of both materials and the Martian environment using data from current and future missions is necessary. Their paper is well worth reading in more detail.
For More Information

Martian Buildings: Feasible Cement/concrete for Onsite Sustainable Construction from the Structural Point of View
Martian Concrete Could be Tough Stuff
Blood, Sweat, and Tears: Extraterrestrial Regolith Biocomposites with in vivo Binders
New Research Indicates the Sun may be More Prone to Flares Than we Thought
This past year saw some significant solar activity, especially during the month of May, which experienced more than 350 solar storms, solar flares, and geomagnetic storms. These included the strongest solar storm in 20 years, which produced aurorae at far lower latitudes than usual, and the strongest solar flare observed since December 2019. Given the threat such events pose to radio communications, power grids, navigation systems, and spacecraft and astronauts, numerous agencies actively monitor the Sun to learn more about its long-term behavior.
However, astronomers have not yet determined whether the Sun can produce “superflares” or how often they might occur. While tree rings and samples of millennia-old glacial ice are effective at recording the most powerful superflares, they are not effective ways to determine their frequency, and direct measurements of solar activity have only been available since the Space Age. In a recent study, an international team of researchers adopted a new approach. By analyzing Kepler data on tens of thousands of Sun-like stars, they estimate that stars like ours produce superflares about once a century.
The study was conducted by researchers from the Max Planck Institute for Solar System Research (MPS), the Sodankylä Geophysical Observatory (SGO) and the Space Physics and Astronomy Research unit at the University of Oulu, the National Astronomical Observatory of Japan (NAOJ), the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder, the National Solar Observatory (NSO), the Commissariat of Atomic and Alternative Energies of Paris-Saclay and the University of Paris-Cité, and multiple universities. The paper describing their research recently appeared in the journal Science.
Superflares are notable for the intense amount of radiation they emit, about 10³² erg (10²⁵ joules). For comparison, consider the Carrington Event of 1859, one of the most violent solar storms of the past 200 years. While this solar flare caused widespread disruption, leading to the collapse of telegraph networks in northern Europe and North America, it released only a hundredth of the energy of a superflare. While tree rings and glacial samples have recorded powerful events in the past, the ability to observe thousands of stars at a time is teaching astronomers a lot about how often the most powerful flares occur.
This is certainly true of the Kepler Space Telescope, which monitored about 100,000 main-sequence stars continuously for years, watching for the periodic dips that indicate the presence of exoplanets. These same observations recorded countless stellar flares, which appeared in the data as short, pronounced peaks in brightness. As Prof. Dr. Sami Solanki, a Director at the MPS and a co-author of the paper, explained in an MPS press release:
“We cannot observe the Sun over thousands of years. Instead, however, we can monitor the behavior of thousands of stars very similar to the Sun over short periods of time. This helps us to estimate how frequently superflares occur.”
For their study, the team analyzed data obtained by Kepler from 56,450 Sun-like stars between 2009 and 2013. This consisted of carefully analyzing the images for signs of potential superflares, which were only a few pixels in size. The team was also careful in their selection of stars, taking into account only those whose surface temperature and brightness were similar to the Sun’s. The researchers also ruled out potential sources of error, including cosmic radiation, transient phenomena (asteroids or comets), and other types of stars flaring up near a Sun-like star.
In total, the Kepler data provided the team with evidence of 220,000 years of stellar activity. From this, they were able to identify 2,889 superflares from 2,527 of the observed stars, producing an average of one superflare per star per century. While previous surveys have found average intervals of a thousand or even ten thousand years, these studies could not determine the exact source of the observed flares. They also had to limit themselves to stars without any close neighbors, making this latest study the most precise and sensitive to date.
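The once-a-century headline follows directly from those survey numbers; here is the arithmetic, using only the figures quoted above.

```python
# Mean superflare interval implied by the Kepler sample described above.
star_years = 220_000        # total stellar activity observed (56,450 stars)
superflares = 2_889         # superflares identified in that data

interval_years = star_years / superflares
print(f"one superflare per Sun-like star every ~{interval_years:.0f} years")
# → roughly once every 76 years, i.e. about once per century
```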
Nevertheless, previous studies that considered indirect evidence and observations made in the past few decades have yielded longer intervals between superflares. Whenever the Sun has released a high level of energetic particles that reached Earth’s atmosphere in the past, the interaction produced a detectable amount of radioactive carbon-14 (C14). This isotope will remain in tree and glacial samples over thousands of years of slow decay, allowing astronomers to identify powerful solar events and how long ago they occurred.
This method has allowed researchers to identify five extreme solar particle events and three candidates within the past twelve thousand years – suggesting an average rate of one superflare per 1,500 years. However, the team acknowledges that it is possible that more violent solar particle events and superflares occurred in the past. “It is unclear whether gigantic flares are always accompanied by coronal mass ejections and what is the relationship between superflares and extreme solar particle events,” said co-author Prof. Dr. Ilya Usoskin from the University of Oulu. “This requires further investigation.”
While the new study does not reveal when the Sun will experience its next superflare, the results urge caution. “The new data are a stark reminder that even the most extreme solar events are part of the Sun’s natural repertoire,” said co-author Dr. Natalie Krivova from the MPS. In the meantime, the best way to stay prepared is to monitor the Sun regularly to ensure reliable forecasting and advanced warning. By 2031, these efforts will be bolstered by the ESA’s Vigil probe, which the MPS is assisting through the development of its Polarimetric and Magnetic Imager (PHI) instrument.