Universe Today

Space and astronomy news

A Space Walking Robot Could Build a Giant Telescope in Space

Wed, 11/06/2024 - 1:28pm

The Hubble Space Telescope was carried to space inside the space shuttle Discovery and then released into low-Earth orbit. The James Webb Space Telescope was squeezed inside the nose cone of an Ariane 5 rocket and then launched. It deployed its segmented mirror and sunshield on its way to its home at the Sun-Earth L2 Lagrange point.

However, the ISS was assembled in space with components launched at different times. Could it be a model for building future space telescopes and other space facilities?

The Universe has a lot of dark corners that need to be peered into. That’s why we’re driven to build more powerful telescopes, which means larger mirrors. However, it becomes increasingly difficult to launch them into space inside rocket nose cones. Since we don’t have space shuttles anymore, this leads us to a natural conclusion: assemble our space telescopes in space using powerful robots.
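The push for larger mirrors follows directly from the diffraction limit: a telescope's best possible angular resolution scales inversely with its aperture diameter. As a standard worked example (wavelength and apertures chosen here for illustration):

```latex
\theta_{\min} \approx 1.22\,\frac{\lambda}{D}
\qquad\Rightarrow\qquad
\theta_{\min}(550\,\mathrm{nm},\ 2.4\,\mathrm{m}) \approx 0.058'',
\quad
\theta_{\min}(550\,\mathrm{nm},\ 25\,\mathrm{m}) \approx 0.0055''
```

In visible light, a 25-meter aperture like the one discussed below could resolve detail roughly ten times finer than Hubble's 2.4-meter mirror.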

New research in the journal Acta Astronautica examines the viability of using walking robots to build space telescopes.

The research is “The new era of walking manipulators in space: Feasibility and operational assessment of assembling a 25 m Large Aperture Space Telescope in orbit.” The lead author is Manu Nair from the Lincoln Centre for Autonomous Systems in the UK.

“This research is timely given the constant clamour for high-resolution astronomy and Earth observation within the space community and serves as a baseline for future missions with telescopes of much larger aperture, missions requiring assembly of space stations, and solar-power generation satellites, to list a few,” the authors write.

While the Canadarm and the European Robotic Arm on the ISS have proven capable and effective, they have limitations. They’re remotely operated by astronauts and have only limited walking abilities.

Recognizing the need for more capable space telescopes, space stations, and other infrastructure, Nair and his co-authors are developing a concept for an improved walking robot. “To address the limitations of conventional walking manipulators, this paper presents a novel seven-degrees-of-freedom dexterous End-Over-End Walking Robot (E-Walker) for future In-Space Assembly and Manufacturing (ISAM) missions,” they write.

An illustration of the E-walker. The robot has seven degrees of freedom, meaning it has seven independent motions. Image Credit: Mini Rai, University of Lincoln.

Robotics, Automation, and Autonomous Systems (RAAS) will play a big role in the future of space telescopes and other infrastructure. These systems require dexterity, a high degree of autonomy, redundancy, and modularity. A lot of work remains to create RAAS that can operate in the harsh environment of space. The E-Walker is a concept that aims to fulfill some of these requirements.

The authors point out how robots are already being used in unique industrial settings here on Earth. The Joint European Torus (JET) is being decommissioned, and a Boston Dynamics Spot quadruped robot was brought in to test how well such machines perform there. It moved around the JET autonomously during a 35-day trial, mapping the facility and taking sensor readings, all while avoiding obstacles and personnel.

The Boston Dynamics Spot robot spent 35 days working autonomously on the Joint European Torus. Here, Spot is inspecting wires and pipes at the facility at Culham, near Oxford (Image Credit: UKAEA)

Using Spot during an industrial shutdown shows the potential of autonomous robots. However, robots still have a long way to go before they can build a space telescope. The authors’ case study could be an important initial step.

Their case study is the hypothetical LAST, a Large Aperture Space Telescope with a wide-field, 25-meter primary mirror that operates in visible light. LAST is the backdrop for the researchers’ feasibility study.

LAST’s primary mirror would be modular, and each piece would have connector ports and interfaces for construction and for data, power, and thermal transfer. This type of modularity would make it easier for autonomous systems to assemble the telescope.

LAST would build its mirror using Primary Mirror Units (PMUs). Nineteen PMUs make up a Primary Mirror Segment (PMS), and 18 PMSs would constitute LAST’s 25-meter primary mirror. A total of 342 PMUs would be needed to complete the telescope.
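As a quick check on the assembly arithmetic, a short Python sketch (unit counts from the paper; the one-unit-per-trip assumption is ours) confirms how the numbers add up:

```python
# Mirror assembly bookkeeping for the hypothetical LAST design.
PMUS_PER_SEGMENT = 19  # Primary Mirror Units per Primary Mirror Segment
SEGMENTS = 18          # Primary Mirror Segments in the full 25-m mirror

total_pmus = PMUS_PER_SEGMENT * SEGMENTS
print(total_pmus)  # 342, matching the paper's total

# If one E-Walker fetched one PMU per trip from the Storage Spacecraft,
# full assembly would take 342 retrieve-and-attach cycles (ignoring failures),
# which is why the ConOps study below weighs multi-robot task sharing.
```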

This figure shows how LAST would be constructed. 342 Primary Mirror Units make up the 18 Primary Mirror Segments, adding up to a 25-meter primary mirror. (b) shows how the center of each PMU is found, and (c) shows a PMU and its connectors. Image Credit: Nair et al. 2024.

The E-Walker concept would also have two spacecraft: a Base Spacecraft (BSC) and a Storage Spacecraft (SSC). The BSC would act as a kind of mothership, sending required commands to the E-Walker, monitoring its operational state, and ensuring that things go smoothly. The SSC would hold all of the PMUs in a stacked arrangement, and the E-Walker would retrieve one at a time.

The researchers developed eleven different Concepts of Operations (ConOps) for the LAST mission. Some of the ConOps included multiple E-Walkers working cooperatively. The goals are to optimize task-sharing, minimize the mass that must be lifted from the ground, and simplify control and motion planning. “The above-mentioned eleven mission scenarios are studied further to choose the most feasible ConOps for the assembly of the 25m LAST,” they explain.

This figure summarizes the 11 mission ConOps developed for LAST. (a) shows assembly with a single E-walker, (b) shows partially shared responsibilities among the E-walkers, (c) shows equally shared responsibilities between E-walkers, and (d) shows assembly carried out in two separate units, which is the safer assembly option. Image Credit: Nair et al. 2024.

Advanced tools like robotics and AI will be mainstays in the future of space exploration. It’s almost impossible to imagine a future where they aren’t critical, especially as our goals become more complex. “The capability to assemble complex systems in orbit using one or more robots will be an absolute requirement for supporting a resilient future orbital ecosystem,” the authors write. “In the forthcoming decades, newer infrastructures in the Earth’s orbits, which are much more advanced than the International Space Station, are needed for in-orbit servicing, manufacturing, recycling, orbital warehouse, Space-based Solar Power (SBSP), and astronomical and Earth-observational stations.”

The authors point out that their work is based on some assumptions and theoretical models. The E-walker concept still needs a lot of work, but a prototype is being developed.

It’s likely that the E-walker or some similar system will eventually be used to build telescopes, space stations, and other infrastructure.


New Report Details What Happened to the Arecibo Observatory

Tue, 11/05/2024 - 8:05pm

In 1963, the Arecibo Observatory became operational on the island of Puerto Rico. Measuring 305 meters (~1000 ft) in diameter, Arecibo’s spherical reflector dish was the largest radio telescope in the world at the time – a record it maintained until 2016 with the construction of the Five-hundred-meter Aperture Spherical Telescope (FAST) in China. In December 2020, Arecibo’s reflector dish collapsed after some of its support cables snapped, leading the National Science Foundation (NSF) to decommission the Observatory.

Shortly thereafter, the NSF and the University of Central Florida launched investigations to determine what caused the collapse. After nearly four years, the Committee on Analysis of Causes of Failure and Collapse of the 305-Meter Telescope at the Arecibo Observatory released an official report that details their findings. According to the report, the collapse was due to weakened infrastructure caused by long-term zinc creep-induced failure in the telescope’s cable sockets and previous damage caused by Hurricane Maria.

The massive dish was originally called the Arecibo Ionospheric Observatory and was intended for ionospheric research in addition to radio astronomy. The former task was part of the Advanced Research Projects Agency’s (ARPA) Defender Program, which aimed to develop ballistic missile defenses. By 1967, the NSF had taken over the administration of Arecibo, making it a civilian facility dedicated to astronomy research. By 1971, NASA had signed a memorandum of understanding to share the costs of maintaining and upgrading the facility.

Radar images of 1991 VH and its satellite by Arecibo Observatory in 2008. Credit: NSF

During its many years of service, the Arecibo Observatory accomplished some amazing feats. This included the first-ever discovery of a binary pulsar in 1974, which led to the discovery team (Russell A. Hulse and Joseph H. Taylor) being awarded the Nobel Prize in physics in 1993. In 1985, the observatory discovered the binary asteroid 4337 Arecibo in the outer regions of the Main Asteroid Belt. In 1992, Arecibo discovered the first exoplanets, two rocky bodies roughly four times the mass of Earth around the pulsar PSR 1257+12. This was followed by the discovery of the first repeating Fast Radio Burst (FRB) in 2016.

The observatory was also responsible for sending the famous Arecibo Message, the most powerful broadcast ever beamed into space and humanity’s first true attempt at Messaging Extraterrestrial Intelligence (METI). The pictorial message, crafted by a group of Cornell University and Arecibo scientists that included Frank Drake (creator of the Drake equation), famed science communicator and author Carl Sagan, Richard Isaacman, Linda May, and James C.G. Walker, was aimed at the globular star cluster M13.

According to the Committee report, the structural failure began in 2017 when Hurricane Maria hit the Observatory on September 20th, 2017:

“Maria subjected the Arecibo Telescope to winds of between 105 and 118 mph, with the source of this uncertainty in wind speed discussed below... Based on a review of available records, the winds of Hurricane Maria subjected the Arecibo Telescope’s cables to the highest structural stress they had ever endured since it opened in 1963.”

However, inspections conducted after the hurricane concluded that “no significant damage had jeopardized the Arecibo Telescope’s structural integrity.” Repairs were nonetheless ordered, but the report identified several issues that caused these repairs to be delayed for years. Even so, the investigation indicated that because the repairs were misdirected “toward components and replacement of a main cable that ultimately never failed,” they would not have prevented the Observatory’s collapse in any case.

Aerial view of the damage to the Arecibo Observatory following the collapse of the telescope platform on December 1st, 2020. Credit: Deborah Martorell

Moreover, in August and November of 2020, an auxiliary and main cable suffered a structural failure, which led to the NSF announcing that they would decommission the telescope through a controlled demolition to avoid a catastrophic collapse. They also stated that the other facilities in the observatory would remain operational, such as the Ángel Ramos Foundation Visitor Center. Before that could occur, however, more support cables buckled on December 1st, 2020, causing the instrument platform to collapse into the dish.

This collapse also removed the tops of the support towers and partially damaged some of the Observatory’s other buildings. Mercifully, no one was hurt. According to the report, the Arecibo Telescope’s cable spelter sockets had degraded considerably, as indicated by the previous cable failures. They also explain that the collapse was triggered by “hidden outer wire failures,” which had already fractured due to shear stress from zinc creep (aka. zinc decay) in the telescope’s cable spelter sockets.

This issue was not identified in the post-Maria inspections, leading to a lack of consideration of degradation mechanisms and an overestimation of the remaining strength of the other cables. According to NSF statements issued in October 2022 and September 2023, the observatory will be remade into an education center known as Arecibo C3, focused on Ciencia (Science), Computación (Computing), and fostering Comunidad (Community). So, while the observatory’s long history of radio astronomy may have ended, it will carry on as a STEM education center, and its legacy will endure.

Further Reading: National Academies Press, Gizmodo


We Understand Rotating Black Holes Even Less Than We Thought

Tue, 11/05/2024 - 2:44pm

Black holes are real. We see them throughout the cosmos, and have even directly imaged the supermassive black hole in M87 and our own Milky Way. We understand black holes quite well, but the theoretical descriptions of these cosmic creatures still have nagging issues. Perhaps the most famous issue is that of the singularity. According to the classical model of general relativity, all the matter that forms a black hole must be compressed into an infinite density, enclosed within a sphere of zero volume. We assume that somehow quantum physics will avert this problem, though without a theory of quantum gravity, we aren’t sure how. But the singularity isn’t the only infinite problem. Take, for example, the strange boundary known as the Cauchy horizon.

When you get down to it, general relativity is a set of complex differential equations. To understand black holes, you must solve these equations subject to a set of conditions such as the amount of mass, rotation, and electromagnetic charge. The equations are so complex that physicists often focus on connecting solutions to certain mathematical boundaries, or horizons. For example, the event horizon is a boundary between the inside and outside of a black hole. It’s one of the easier horizons to explain because if you happen to cross the event horizon of a black hole, you are forever trapped within it. The event horizon is like a cosmic Hotel California.

For a simple, non-rotating black hole, the event horizon is the only one that really matters. But for rotating black holes, things get really weird. To begin with, the singularity becomes a ring, not a point. And rather than a single event horizon, there is an outer and an inner horizon. The outer one still acts as an event horizon, forever trapping what dares to cross its boundary. The inner one is what’s often called the Cauchy horizon. If you cross the inner horizon, you are still trapped within, but you aren’t necessarily doomed to fall ever closer toward the singularity. Within the Cauchy horizon, spacetime can behave somewhat normally, though it is bounded.
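For reference, the horizon locations follow from the standard Kerr solution (a textbook result, not spelled out in the article): for a black hole of mass M and dimensionless spin a* between 0 and 1,

```latex
r_{\pm} = \frac{GM}{c^{2}}\left(1 \pm \sqrt{1 - a_{*}^{2}}\right)
```

where r₊ is the outer event horizon and r₋ the inner (Cauchy) horizon. Setting a* = 0 recovers the single Schwarzschild horizon at 2GM/c², and in the extremal limit a* = 1 the two horizons merge at GM/c².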

Horizon structure for a rotating black hole. Credit: Simon Tyran, via Wikipedia

The Cauchy horizon can cause all sorts of strange things, but one of them is that the horizon is unstable. If you try to determine perturbations of the horizon, the calculated mass within the horizon diverges, an effect known as mass inflation. It’s somewhat similar to the way the singularity approaches infinite density in the classical model. While this is frustrating, physicists can sweep it under the rug by invoking the principle of cosmic censorship. It basically says that as long as some basic conditions hold, all the strange behaviors like singularities and mass inflation are always bounded by an event horizon. There may be an infinity of mathematical demons in a black hole, but they can never escape, so we don’t really need to worry about them.

But a new paper may have handed those demons a key. The paper shows that mass inflation can occur even without a Cauchy horizon. Without an explicit Cauchy horizon, those basic conditions for cosmic censorship don’t necessarily apply. This suggests that the black hole solutions we get from general relativity are flawed. They can describe black holes that exist for a limited time, but not the long-lasting black holes that actually exist.

What this means isn’t entirely clear. It might be that this impermanent loophole is just general relativity’s way of pointing us toward a quantum theory of gravity. After all, if Hawking radiation is real, all black holes are impermanent and eventually evaporate. But the result could also suggest that general relativity is only partially correct, and what we need is an extension of Einstein’s model the way GR extended Newtonian gravity. What is clear is that our understanding of black holes is incomplete.

Reference: Carballo-Rubio, Raúl, et al. “Mass inflation without Cauchy horizons.” Physical Review Letters 133.18 (2024): 181402.


Habitable Worlds are Found in Safe Places

Tue, 11/05/2024 - 2:30pm

When we think of exoplanets that may be able to support life, we home in on the habitable zone. A habitable zone is a region around a star where planets receive enough stellar energy to have liquid surface water. It’s a somewhat crude but helpful first step when examining thousands of exoplanets.

However, there’s a lot more to habitability than that.

In a dense stellar environment, planets in habitable zones have more than their host star to contend with. Stellar flybys and exploding supernovae can eject habitable zone exoplanets from their solar systems and even destroy their atmospheres or the planets themselves.

New research examines the threats facing the habitable zone planets in our stellar neighbourhood. The study is “The 10 pc Neighborhood of Habitable Zone Exoplanetary Systems: Threat Assessment from Stellar Encounters & Supernovae,” and it has been accepted for publication in The Astronomical Journal. The lead author is Tisyagupta Pyne from the Integrated Science Education And Research Centre at Visva-Bharati University in India.

The researchers examined the 10-parsec regions around the 84 solar systems with habitable zone exoplanets. Some of these Habitable Zone Systems (HZS) face risks from stars outside of the solar systems. How do these risks affect their habitability? What does it mean for our notion of the habitable zone?

“Among the 4,500+ exoplanet-hosting stars, about 140+ are known to host planets in their habitable zones,” the authors write. “We assess the possible risks that local stellar environment of these HZS pose to their habitability.”

This image from the research shows the sky positions of exoplanet-hosting stars projected on a Mollweide map. HZS are denoted by yellow-green circles, while the remaining population of exoplanet hosts is represented by gray circles. The studied sample of 84 HZS, located within 220 pc of the Sun, is represented by crossed yellow-green circles. The three high-density HZS located near the galactic plane are labeled 1, 2 and 3 in white. The colour bar represents the stellar density, i.e., the number of neighbouring stars within a radius of 5 arcminutes. Image Credit: Pyne et al. 2024.

We have more than 150 confirmed exoplanets in habitable zones, and as exoplanet science advances, scientists are developing a more detailed understanding of what habitable zone means. Scientists increasingly use the terms conservative habitable zone and optimistic habitable zone.

The optimistic habitable zone is defined as regions that receive less radiation from their star than Venus received one billion years ago and more than Mars did 3.8 billion years ago. Scientists think that recent Venus (RV) and early Mars (EM) both likely had surface water.

The conservative habitable zone is a more stringent definition. It’s a narrower region around a star where an exoplanet could have surface water. It’s defined by an inner runaway greenhouse edge where stellar flux would vaporize surface water and an outer maximum greenhouse edge where the greenhouse effect of carbon dioxide is dominated by Rayleigh scattering.
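A minimal sketch of how such flux boundaries translate into orbital distances, assuming the approximate solar-analog effective-flux values of Kopparapu et al. (2013); the study's own boundary values may differ:

```python
import math

# Approximate effective fluxes (in units of Earth's insolation) at the
# habitable-zone edges of a Sun-like star, after Kopparapu et al. 2013.
S_EFF = {
    "recent Venus (optimistic inner)":         1.78,
    "runaway greenhouse (conservative inner)": 1.04,
    "maximum greenhouse (conservative outer)": 0.35,
    "early Mars (optimistic outer)":           0.32,
}

def hz_edge_au(luminosity_solar: float, s_eff: float) -> float:
    """Distance (AU) at which a star of the given luminosity delivers s_eff."""
    return math.sqrt(luminosity_solar / s_eff)

for name, s in S_EFF.items():
    print(f"{name}: {hz_edge_au(1.0, s):.2f} AU")
# Sun-like star: conservative zone ~0.98-1.69 AU, optimistic ~0.75-1.77 AU.
```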

Those are useful scientific definitions as far as they go. But what about habitable stellar environments? In recent years, scientists have learned a lot about how stars behave, the characteristics of exoplanets, and how the two are intertwined.

“The discovery of numerous extrasolar planets has revealed a diverse array of stellar and planetary characteristics, making systematic comparisons crucial for evaluating habitability and assessing the potential for life beyond our solar system,” the authors write.

To make these necessary systematic comparisons, the researchers developed two metrics: the Solar Similarity Index (SSI) and the Neighborhood Similarity Index (NSI). Since main sequence stars like our Sun are conducive to habitability, the SSI compares our Solar System’s properties with those of other HZS. The NSI compares the properties of stars in a 10-parsec region around the Sun to the same size region around other HZS.
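The paper's exact SSI and NSI definitions aren't reproduced in this article, so the following is only an illustrative sketch of how a normalized similarity index over a set of neighbourhood properties might behave; the feature names and weighting are hypothetical:

```python
import numpy as np

def similarity_index(reference: np.ndarray, target: np.ndarray) -> float:
    """Illustrative similarity score in [0, 1]; 1.0 means identical properties.

    Inputs are vectors of comparable, pre-normalized quantities
    (hypothetical examples: stellar density, mean stellar mass, mean age).
    """
    diff = np.abs(reference - target) / (np.abs(reference) + np.abs(target))
    return float(1.0 - np.mean(diff))

# Reference is the solar neighbourhood, normalized to 1 by construction.
solar_neighbourhood = np.array([1.0, 1.0, 1.0])
candidate_hzs       = np.array([0.8, 1.1, 0.9])  # hypothetical HZS neighbourhood

print(round(similarity_index(solar_neighbourhood, candidate_hzs), 2))  # ~0.93
```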

This research is mostly based on data from the ESA’s Gaia spacecraft, which is building a map of the Milky Way by measuring more than a billion stars. But the further away an HZS is, or the dimmer its stars are, the more likely it is that Gaia has missed some of them, which affects the research’s results. This image shows Gaia’s data completeness. The colour scale indicates the faintest G magnitude at which the 95% completeness threshold is achieved. “Our sample of 84 HZS (green circles) has been overlaid on the map to visually depict the completeness of their respective neighbourhoods,” the authors write. Image Credit: Pyne et al. 2024.

These indices put habitable zones in a larger context.

“While the concept of HZ is vital in the search for habitable worlds, the stellar environment of the planet also plays an important role in determining longevity and maintenance of habitability,” the authors write. “Studies have shown that a high rate of catastrophic events, such as supernovae and close stellar encounters in regions of high stellar density, is not conducive to the evolution of complex life forms and the maintenance of habitability over long periods.”

When radiation and high-energy particles from a distant source reach a planet in a habitable zone, they can cause severe damage to Earth-like planets. Supernovae are a dangerous source of radiation and particles, and if one were to explode close enough to Earth, that would be the end of life. Scientists know that ancient supernovae have left their mark on Earth, but none of them were close enough to destroy the atmosphere.

“Our primary focus is to investigate the effects of SNe on the atmospheres of exoplanets or exomoons assuming their atmospheres to be Earth-like,” the authors write.

The first factor is stellar density. The more stars in a neighbourhood, the greater the likelihood of supernova explosions and stellar flybys.

“The astrophysical impacts of the stellar environment is a ‘low-probability, high-consequence’ scenario for the continuation of habitability of exoplanets,” the authors write. Though disruptive events like supernova explosions or close stellar flybys are unlikely, the consequences can be so severe that habitability is completely eliminated.

When it came to the supernova threat, the researchers looked at high-mass stars in stellar neighbourhoods since only massive stars explode. Pyne and her colleagues found high-mass stars with more than eight solar masses in the 10-parsec neighbourhoods of two HZS: TOI-1227 and HD 48265. “These high-mass stars are potential progenitors for supernova explosions,” the authors explain.

Only one of the HZS is at risk of a stellar flyby. HD 165155 has an encounter rate of roughly one per 5 Gyr. That means it’s at greater risk of an encounter with another star that could eject planets from its habitable zone.

The team’s pair of indices, the SSI and the NSI, produced divergent results. “… we find that the stellar environments of the majority of HZS exhibit a high degree of similarity (NSI> 0.75) to the solar neighbourhood,” they explain. However, because of the wide variety of stars in HZS, comparing them to the Sun results in a wide range of SSI values.

We know the danger supernova explosions pose to habitability. The initial burst of radiation could kill anything on the surface of a planet too close. The ongoing radiation could strip away the atmospheres of planets somewhat further away and could also cause DNA damage in any lifeforms exposed to it. For planets still further from the blast, the supernova could alter their climate and trigger extinctions. There’s no settled answer for how far away a planet needs to be to avoid devastation, but many scientists say that within about 50 light-years, a planet is probably toast.

We can see the results of some of the stellar flybys the authors are considering. Rogue planets, or free-floating planets (FFPs), are likely in their hapless situations precisely because a stellar interloper got too close to their home systems and disrupted the gravitational relationships between the planets and their stars. We don’t know how many of these FFPs are in the Milky Way, but there could be many billions of them. Future telescopes like the Nancy Grace Roman Space Telescope will help us understand how many there truly are.

An artist’s illustration of a rogue planet, dark and mysterious. Image Credit: NASA

Habitability may be fleeting, and our planet may be the exception. It’s possible that life appears on many planets in habitable zones but can’t last long due to various factors. From a great distance away, we can’t detect all the variables that go into exoplanet habitability.

However, we can gain an understanding of the stellar environments in which potentially habitable exoplanets exist, and this research shows us how.


New Glenn Booster Moves to Launch Complex 36

Tue, 11/05/2024 - 2:21pm

Nine years ago, Blue Origin revealed the plans for their New Glenn rocket, a heavy-lift vehicle with a reusable first stage that would compete with SpaceX for orbital flights. Since that time, SpaceX has launched hundreds of rockets, while Blue Origin has been working mostly in secret on New Glenn. Last week, the company rolled out the first prototype of the first-stage booster to the launch complex at Cape Canaveral Space Force Station. If all goes well, we could see a late November test on the launch pad.

The test will be an integrated launch vehicle hot-fire which will include the second stage and a stacked payload.

Images posted on social media by Blue Origin CEO Dave Limp showed the 57-meter (188-foot)-long first stage with its seven BE-4 engines as it was transported from the production facility in Merritt Island, Florida — next to the Kennedy Space Center — to Launch Complex 36 at Cape Canaveral. Limp said that it was a 23-mile, multiple-hour journey “because we have to take the long way around.” The booster was carried by Blue Origin’s trailers called GERT (Giant Enormous Rocket Truck).

#NewGlenn’s GS1 is on the move! Our transporter comprises two trailers connected by cradles and a strongback assembly designed in-house. There are 22 axles and 176 tires on this transport vehicle. It’s towed by an Oshkosh M1070, a repurposed U.S. Army tank transporter, with 505… pic.twitter.com/4Qq7Ofq2g2

— Dave Limp (@davill) October 30, 2024

“Our transporter comprises two trailers connected by cradles and a strongback assembly designed in-house,” said Limp on X. “There are 22 axles and 176 tires on this transport vehicle…The distance between GERT’s front bumper and the trailer’s rear is 310’, about the length of a football field.”

Limp said the next step is to put the first and second stages together on the launch pad for the fully integrated hot fire dress rehearsal. The second stage recently completed its own hot fire at the launch site.

An overhead view of the New Glenn booster heading to launch complex 36 at Cape Canaveral during the night of Oct. 30, 2024. Credit: Blue Origin/Dave Limp.

Hopefully the test will lead to Blue Origin’s first ever launch to orbit. While the New Glenn rocket has had its share of delays, it seems Blue Origin has also taken a slow, measured approach to prepare for its first launch. In February of this year, a boilerplate of the rocket was finally rolled onto the launch pad at Cape Canaveral for testing.  Then in May 2024, New Glenn was rolled out again for additional testing. Now, the fully integrated test in the next few weeks will perhaps lead to a launch by the end of the year.

New Glenn’s seven engines will give it more than 3.8 million pounds of thrust on liftoff. The goal is for New Glenn to reuse its first-stage booster and the seven engines powering it, with recovery on a barge located downrange off the coast of Florida in the Atlantic Ocean.

New Glenn boosters are designed for 25 flights.

Blue Origin says New Glenn will launch payloads into high-energy orbits. It can carry more than 13 metric tons to geostationary transfer orbit (GTO) and 45 metric tons to low Earth orbit (LEO).

For the first flight, Blue Origin will be flying its own hardware as a payload: a satellite deployment technology called Blue Ring. Even though it doesn’t have a paying customer for the upcoming launch, a success would count as the first of two certification flights the U.S. Space Force requires before New Glenn can be awarded future national security missions alongside SpaceX and United Launch Alliance (ULA).

Additional details can be found at PhysOrg and NASASpaceflight.com.


How Many Additional Exoplanets are in Known Systems?

Tue, 11/05/2024 - 10:05am

One thing we’ve learned in recent decades is that exoplanets are surprisingly common. So far, we’ve confirmed nearly 6,000 planets, and we have evidence for thousands more. Most of these planets were discovered using the transit method, though there are other methods as well. Many stars are known to have multiple planets, such as the TRAPPIST-1 system with seven Earth-sized worlds. But even within known planetary systems there could be planets we’ve overlooked. Perhaps their orbit doesn’t pass in front of the star from our vantage point, or the evidence of their presence is buried in data noise. How might we find them? A recent paper on the arXiv has an interesting approach.

Rather than combing through the observational data trying to extract more planets from the noise, the authors suggest that we look at the orbital dynamics of known systems to see if planets might be possible between the planets we know. Established systems are millions or billions of years old, so their planetary orbits must be stable on those timescales. If the planets of a system are “closely packed,” then adding new planets to the mix would throw the system out of kilter. If the system is “loosely packed,” then we could add hypothetical planets between the others, and the system would still be dynamically stable.
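A common way to quantify this kind of packing is the separation between neighbouring planets in units of their mutual Hill radius: two-planet pairs spaced closer than about 2√3 mutual Hill radii cannot be Hill-stable, and empirically, multi-planet systems tend to need spacings of roughly ten or more to survive for billions of years. A minimal sketch of that test, with illustrative values rather than numbers from the paper:

```python
def mutual_hill_radius(a1_au, a2_au, m1_earth, m2_earth, mstar_solar=1.0):
    """Mutual Hill radius (AU) of two planets orbiting the same star."""
    SUN_IN_EARTH_MASSES = 333_000.0  # approximate
    mass_ratio = (m1_earth + m2_earth) / (3.0 * mstar_solar * SUN_IN_EARTH_MASSES)
    return mass_ratio ** (1.0 / 3.0) * 0.5 * (a1_au + a2_au)

def hill_spacing(a1_au, a2_au, m1_earth, m2_earth, mstar_solar=1.0):
    """Separation of a planet pair in units of their mutual Hill radius."""
    r_hill = mutual_hill_radius(a1_au, a2_au, m1_earth, m2_earth, mstar_solar)
    return (a2_au - a1_au) / r_hill

# Illustrative pair: two 5-Earth-mass planets at 0.10 and 0.15 AU around a Sun.
print(f"{hill_spacing(0.10, 0.15, 5.0, 5.0):.1f}")  # ~18.6: loosely packed
```

A gap that wide leaves dynamical room for a hypothetical planet in between, which is the kind of question the authors' long-term simulations probe far more rigorously.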

The seven planetary systems considered. Credit: Horner, et al

To show how this would work, the authors consider seven planetary systems discovered by the Transiting Exoplanet Survey Satellite (TESS) known to have two planets. Since it isn’t likely that a system has only two planets, there is a good chance they have others. The team then ran thousands of simulations of these systems with hypothetical planets, calculating if they could remain stable over millions of years. They found that for two of the systems, extra planets (other than planets much more distant than the known ones) could be ruled out on dynamical grounds. Extra planets would almost certainly destabilize the systems. But five of the systems could remain stable with more planets. That doesn’t mean those systems have more planets, only that they could.

One of the things this work shows is that most of the currently known exoplanetary systems likely have yet-undiscovered worlds. This approach could also help us sort systems to determine which ones might deserve a further look. We are still in the early stages of discovery, and we are gathering data with incredible speed. We need tools like this so we aren’t overwhelmed by piles of new data.

Reference: Horner, Jonathan, et al. “The Search for the Inbetweeners: How packed are TESS planetary systems?” arXiv preprint arXiv:2411.00245 (2024).


Hubble and Webb are the Dream Team. Don't Break Them Up

Tue, 11/05/2024 - 3:16am

Many people think of the James Webb Space Telescope as a sort of Hubble 2. They understand that the Hubble Space Telescope (HST) has served us well but is now old, and overdue for replacement. NASA seems to agree, as they have not sent a maintenance mission in over fifteen years, and are already preparing to wind down operations. But a recent paper argues that this is a mistake. Despite its age, HST still performs extremely well and continues to produce an avalanche of valuable scientific results. And given that JWST was never designed as a replacement for HST — it is an infrared (IR) telescope — we would best be served by operating both telescopes in tandem, to maximize coverage of all observations.

Let’s not fool ourselves: the Hubble Space Telescope (HST) is old, and is eventually going to fall back to Earth. Although it was designed to be repairable and upgradable, there have been no servicing missions since 2009. Those missions relied on the Space Shuttle, which could capture the telescope and provide a working base for astronauts. Servicing missions could last weeks, and only the Space Shuttle could transport the six astronauts to the telescope and house them for the duration of the mission.

Without those servicing missions, failing components can no longer be replaced, and the overall health of HST will keep declining. If nothing is done, HST will eventually stop working altogether. To avoid it becoming just another piece of space junk, plans are already being developed to de-orbit it and send it crashing into the Pacific Ocean. But that’s no reason to give up on it. It still has as clear a view of the cosmos as ever, and mission scientists are doing an excellent job of working around technical problems as they arise.

The James Webb Space Telescope was launched into space on Christmas Day in 2021. Its system of foldable hexagonal mirrors gives it an effective diameter some 2.7 times larger than HST’s, and it is designed to see down into the mid-IR range. Within months of deployment, it had already seen things that clashed with existing models of how the Universe formed, creating a mini-crisis in some fields and leading unscrupulous news editors to write headlines questioning whether the “Big Bang Theory” was under threat!

This image of NASA’s Hubble Space Telescope was taken on May 19, 2009 after deployment during Servicing Mission 4. NASA

The reason JWST was able to capture such ancient galaxies is that it is primarily an IR telescope: As the Universe expands, photons from distant objects get red-shifted until stars that originally shone in visible light can now only be seen in the IR. But these IR views are proving extremely valuable in other scientific fields apart from cosmology. In fact, many of the most striking images released by JWST’s press team are IR images of familiar objects, revealing hidden complexities that had not been seen before.

This is a key difference between the two telescopes: While HST’s range overlaps slightly with JWST’s, it can see all the way up into ultraviolet (UV) wavelengths. HST was launched in 1990, seven years late and billions of dollars over budget. Its 2.4-meter primary mirror needed to be one of the most precisely ground mirrors ever made, because it was intended to be diffraction limited at UV wavelengths. Famously, avoidable problems in the testing process led to it being very precisely figured to a slightly wrong shape; the resulting spherical aberration prevented it from coming to a sharp focus.

Fortunately, the telescope was designed from the start to be serviceable, and could even have been returned to Earth by the Space Shuttle for repairs if necessary. In the end, though, NASA opticians were able to design and build a set of corrective optics to solve the problem, and the COSTAR system was installed by astronauts on the first servicing mission. Over the years, NASA sent up three more servicing missions to upgrade or repair components and install new instruments.

Illustration of NASA’s James Webb Space Telescope. Credits: NASA

HST is arguably one of the most successful scientific instruments ever built. Since 1990, it has been the subject of approximately 1,200 science press releases, which together have been read more than 400 million times. The more than 46,000 scientific papers written using HST data have been cited more than 900,000 times! And even in its current degraded state, it still provided data for 1,435 papers in 2023 alone.

JWST also ran over time and over budget, but had a far more successful deployment. Despite having a much larger mirror, with more than six times the collecting area of HST, the entire observatory only weighs half as much as HST. Because of its greater sensitivity, and the fact that it can see ancient light redshifted into IR wavelengths, it can see far deeper into the Universe than HST. It is these observations, of galaxies formed when the Universe was extremely young (100 – 180 million years), that created such excitement shortly after it was deployed.
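Those ratios are consistent with the approximate published figures: JWST's segmented aperture has an effective collecting area of about 25 m² versus roughly 4 m² for HST (after its central obstruction), and the observatories weigh about 6,500 kg and 11,100 kg respectively:

```latex
\frac{A_{\mathrm{JWST}}}{A_{\mathrm{HST}}} \approx \frac{25\ \mathrm{m^2}}{4\ \mathrm{m^2}} \approx 6.3,
\qquad
\frac{m_{\mathrm{JWST}}}{m_{\mathrm{HST}}} \approx \frac{6500\ \mathrm{kg}}{11{,}100\ \mathrm{kg}} \approx 0.59
```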

As valuable as these telescopes are, they will not last forever. JWST is located deep in space, some 1.5 million kilometers from Earth near the Sun-Earth L2 Lagrange point. When it eventually fails, it will become just another piece of debris orbiting the Sun in the vast emptiness of the Solar System. HST, however, is in Low Earth Orbit (LEO), and suffers very slight amounts of drag from the faint outer reaches of the atmosphere. Over time it will gradually lose speed, drifting downwards until it enters the atmosphere proper and crashes to Earth. Because of its size, it will not burn up completely, and large chunks will smash into the surface.

Because it cannot be predicted where exactly it will re-enter, mission planners always intended to capture it with the Space Shuttle and return it to Earth before this happened. Its final resting place was supposed to be on display in a museum, but unfortunately the shuttle program was cancelled. The current plan is to send up an uncrewed rocket which will dock with the telescope (a special attachment was installed on the final servicing mission for this purpose) and deorbit it in a controlled way to ensure that its pieces land safely in the ocean.

You can find the original paper at https://arxiv.org/abs/2410.01187


Scientists Have Figured out why Martian Soil is so Crusty

Mon, 11/04/2024 - 7:13pm

On November 26th, 2018, NASA’s Interior Exploration using Seismic Investigations, Geodesy, and Heat Transport (InSight) mission landed on Mars. This was a major milestone in Mars exploration since it was the first time a research station had been deployed to the surface to probe the planet’s interior. One of the most important instruments InSight would use to do this was the Heat Flow and Physical Properties Package (HP3) developed by the German Aerospace Center (DLR). Also known as the Martian Mole, this instrument measured the heat flow from deep inside the planet for four years.

The HP3 was designed to dig up to five meters (~16.5 ft) into the surface to sense heat deeper in Mars’ interior. Unfortunately, the Mole struggled to burrow itself and eventually got just beneath the surface, which was a surprise to scientists. Nevertheless, the Mole gathered considerable data on the daily and seasonal fluctuations below the surface. Analysis of this data by a team from the German Aerospace Center (DLR) has yielded new insight into why Martian soil is so “crusty.” According to their findings, temperatures in the top 40 cm (~16 inches) of the Martian surface lead to the formation of salt films that harden the soil.

The analysis was conducted by a team from the Microgravity User Support Center (MUSC) of the DLR Space Operations and Astronaut Training Institution in Cologne, which is responsible for overseeing the HP3 experiment. The heat data it obtained from the interior could be integral to understanding Mars’s geological evolution and addressing theories about its core region. At present, scientists suspect that geological activity on Mars largely ceased by the late Hesperian period (ca. 3 billion years ago), though there is evidence that lava still flows there today.

The “Mars Mole,” Heat Flow and Physical Properties Package (HP³). Credit: DLR

This was likely caused by Mars’ interior cooling faster due to its lower mass and lower pressure. Scientists theorize that this caused Mars’ outer core to solidify while its inner core became liquid—though this remains an open question. By comparing the subsurface temperatures obtained by InSight to surface temperatures, the DLR team could measure the rate of heat transport in the crust (thermal diffusivity) and thermal conductivity. From this, the density of the Martian soil could be estimated for the first time.
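The step from heat-flow measurements to density rests on the standard relation between thermal conductivity k, thermal diffusivity κ, density ρ, and specific heat capacity c_p: measuring k and κ and adopting a plausible c_p for the soil lets ρ be solved for:

```latex
\kappa = \frac{k}{\rho\, c_{p}}
\qquad\Longrightarrow\qquad
\rho = \frac{k}{\kappa\, c_{p}}
```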

The team determined that the density of the uppermost 30 cm (~12 inches) of soil is comparable to basaltic sand – something that was not anticipated based on orbiter data. This material is common on Earth and is created by weathering volcanic rock rich in iron and magnesium. Beneath this layer, the soil density is comparable to consolidated sand and coarser basalt fragments. Tilman Spohn, the principal investigator of the HP3 experiment at the DLR Institute of Planetary Research, explained in a DLR press release:

“To get an idea of the mechanical properties of the soil, I like to compare it to floral foam, widely used in floristry for flower arrangements. It is a lightweight, highly porous material in which holes are created when plant stems are pressed into it... Over the course of seven Martian days, we measured thermal conductivity and temperature fluctuations at short intervals.

Additionally, we continuously measured the highest and lowest daily temperatures over the second Martian year. The average temperature over the depth of the 40-centimetre-long thermal probe was minus 56 degrees Celsius (217.5 Kelvin). These records, documenting the temperature curve over daily cycles and seasonal variations, were the first of their kind on Mars.”

NASA’s InSight spacecraft landed in the Elysium Planitia region on Mars on 26 November 2018. Credit: NASA-JPL/USGS/MOLA/DLR

Because the encrusted Martian soil (aka “duricrust”) extends to a depth of 20 cm (~8 inches), the Mole managed to penetrate just a little more than 40 cm (~16 inches), well short of its 5 m (~16.5 ft) objective. Nevertheless, the data obtained at this depth has provided valuable insight into heat transport on Mars. Accordingly, the team found that ground temperatures fluctuated by only 5 to 7 °C (9 to 12.5 °F) during a Martian day, a tiny fraction of the fluctuations observed on the surface: 110 to 130 °C (198 to 234 °F).

Seasonally, they noted temperature fluctuations of 13 °C (~23.5 °F), with the layers near the surface remaining below the freezing point of water on Mars. This demonstrates that the Martian soil is an excellent insulator, significantly reducing the large temperature differences at shallow depths. This influences various physical properties of the Martian soil, including elasticity, thermal conductivity, heat capacity, the movement of material within it, and the speed at which seismic waves pass through it.
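The depth dependence of those swings follows the textbook thermal skin depth, the e-folding depth of a periodic temperature wave of period P in soil with thermal diffusivity κ (a general heat-conduction result, not a number from the DLR study):

```latex
\delta = \sqrt{\frac{\kappa P}{\pi}}
```

Because a Martian year lasts about 669 sols, the seasonal wave penetrates roughly √669 ≈ 26 times deeper than the daily one for the same diffusivity, which is why the probe saw both cycles strongly damped at depths of only tens of centimetres.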

“Temperature also has a strong influence on chemical reactions occurring in the soil, on the exchange with gas molecules in the atmosphere, and therefore also on potential biological processes regarding possible microbial life on Mars,” said Spohn. “These insights into the properties and strength of the Martian soil are also of particular interest for future human exploration of Mars.”

What was particularly interesting, though, is how the temperature fluctuations enable the formation of salty brines for ten hours a day (when there is sufficient moisture in the atmosphere) in winter and spring. Therefore, the solidification of this brine is the most likely explanation for the duricrust layer beneath the surface. This information could prove very useful as future missions explore Mars and attempt to probe beneath the surface to learn more about the Red Planet’s history.

Further Reading: DLR


Another Way to Extract Energy From Black Holes?

Mon, 11/04/2024 - 1:40pm

The gravitational field of a rotating black hole is powerful and strange. It is so powerful that it warps space and time back upon itself, and it is so strange that even simple concepts such as motion and rotation are turned on their heads. Understanding how these concepts play out is challenging, but they help astronomers understand how black holes generate such tremendous energy. Take, for example, the concept of frame dragging.

Black holes form when matter collapses to be so dense that spacetime encloses it within an event horizon. This means black holes aren’t physical objects in the way we’re used to. They aren’t made of matter, but are rather a gravitational imprint of where matter was. The same is true for the gravitational collapse of rotating matter. When we talk about a rotating black hole, this doesn’t mean the event horizon is spinning like a top; it means that spacetime near the black hole is twisted into a gravitational echo of the once-rotating matter. Which is where things get weird.

Suppose you were to drop a ball into a black hole. Not orbiting or rotating, just a simple drop straight down. Rather than falling in a straight line toward the black hole, the path of the ball will shift toward an orbital path as it falls, moving around the black hole ever faster as it gets closer. This effect is known as frame dragging. Part of the “rotation” of the black hole is transferred to the ball, even though the ball is in free fall. The closer the ball is to the black hole, the greater the effect.
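In the weak-field, slow-rotation limit, this is the Lense-Thirring effect: a body with angular momentum J drags local inertial frames around at an angular rate that falls off with the cube of distance (the full Kerr expression is more involved, but this textbook form captures the scaling):

```latex
\omega_{\mathrm{LT}} = \frac{2 G J}{c^{2} r^{3}}
```

This cubic falloff is why the dropped ball picks up more and more sideways motion the closer it gets to the black hole.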

This view of the M87 supermassive black hole in polarized light highlights the signature of magnetic fields. (Credit: EHT Collaboration)

A recent paper on the arXiv shows how this effect can transfer energy from a black hole’s magnetic field to nearby matter. Black holes are often surrounded by an accretion disk of ionized gas and dust. As the material of the disk orbits the black hole, it can generate a powerful magnetic field, which can superheat the material. While most of the power generated by this magnetic field is caused by the orbital motion, frame dragging can add an extra kick.

Essentially, a black hole’s magnetic field is generated by the bulk motion of the accretion disk. But thanks to frame dragging, the inner portion of the disk moves a bit faster than it should, while the outer portion moves a bit slower. This relative motion between them means that ionized matter moves relative to the magnetic field, creating a kind of dynamo effect. Thanks to frame dragging, the black hole creates more electromagnetic energy than you’d expect. While this effect is small for stellar mass black holes, it is large enough for supermassive black holes that we might see the effect in quasars through gaps in their power spectrum.

Reference: Okamoto, Isao, Toshio Uchida, and Yoogeun Song. “Electromagnetic Energy Extraction in Kerr Black Holes through Frame-Dragging Magnetospheres.” arXiv preprint arXiv:2401.12684 (2024).


Plastic Waste on our Beaches Now Visible from Space, Says New Study

Sun, 11/03/2024 - 8:28pm

According to the United Nations, the world produces about 430 million metric tons (about 474 million U.S. tons) of plastic annually, two-thirds of which are only used for a short time and quickly become garbage. What’s more, plastics are the most harmful and persistent fraction of marine litter, accounting for at least 85% of total marine waste. This problem is easily recognizable due to the Great Pacific Garbage Patch and the amount of plastic waste that washes up on beaches and shores every year. Unless measures are taken to address this problem, the annual flow of plastic into the ocean could triple by 2040.

One way to address this problem is to improve the global tracking of plastic waste using Earth observation satellites. In a recent study, a team of Australian researchers developed a new method for spotting plastic rubbish on our beaches, which they successfully field-tested on a remote stretch of coastline. This satellite imagery tool distinguishes between sand, water, and plastics based on how they reflect light differently. It can detect plastics on shorelines from an altitude of more than 600 km (~375 mi) – higher than the International Space Station‘s (ISS) orbit.

The paper that describes their tool, “Beached Plastic Debris Index; a modern index for detecting plastics on beaches,” was recently published by the Marine Pollution Bulletin. The research team was led by Jenna Guffogg, a researcher at the Royal Melbourne Institute of Technology University (RMIT) and the Faculty of Geo-Information Science and Earth Observation (ITC) at the University of Twente. She was joined by multiple colleagues from both institutions. The study was part of Dr. Guffogg’s joint PhD research with the support of an Australian Government Research Training Program (RTP) scholarship.

Dr Jenna Guffogg said plastic on beaches can have severe impacts on wildlife and their habitats, just as it does in open waters. Credit: BPDI

According to current estimates, humans dump well over 10 million metric tons (11 million U.S. tons) of plastic waste into our oceans annually. Since plastic production continues to increase worldwide, these numbers are projected to increase dramatically. What ends up on our beaches can severely impact wildlife and marine habitats, just like the impact it has in open waters. If these plastics are not removed, they will inevitably fragment into micro and nano plastics, another major environmental hazard. Said Dr. Guffogg in a recent RMIT University press release:

“Plastics can be mistaken for food; larger animals become entangled, and smaller ones, like hermit crabs, become trapped inside items such as plastic containers. Remote island beaches have some of the highest recorded densities of plastics in the world, and we’re also seeing increasing volumes of plastics and derelict fishing gear on the remote shorelines of northern Australia.

“While the impacts of these ocean plastics on the environment, fishing and, tourism are well documented, methods for measuring the exact scale of the issue or targeting clean-up operations, sometimes most needed in remote locations, have been held back by technological limitations.”

Satellite technology is already used to track plastic garbage floating around the world’s oceans. This includes relatively small drifts containing thousands of plastic bottles, bags, and fishing nets, but also gigantic floating trash islands like the Great Pacific Garbage Patch. As of 2018, this garbage patch measured about 1.6 million km² (620,000 mi²) and consisted of 45,000–129,000 metric tons (50,000–142,000 U.S. tons) of plastic. However, the technology used to locate plastic waste in the ocean is largely ineffective at spotting plastic on beaches.

Geospatial scientists have found a way to detect plastic waste on remote beaches, bringing us closer to global monitoring options. Credit: RMIT

Much of the problem is that plastic can be mistaken for patches of sand when viewed from space. The Beached Plastic Debris Index (BPDI) developed by Dr. Guffogg and her colleagues circumvents this by employing a spectral index – a mathematical formula that analyzes patterns of reflected light. The BPDI is specially designed to map plastic debris in coastal areas using high-definition data from the WorldView-3 satellite, a commercial Earth observation satellite (owned by Maxar Technologies) that has been in operation since 2014.
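The specific band combination behind the BPDI is defined in the paper rather than in this article, but indices in this family typically take a normalized-difference form, contrasting a wavelength where the target reflects strongly against one where it doesn't. A generic sketch with hypothetical band choices and threshold:

```python
import numpy as np

def normalized_difference_index(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Generic normalized-difference spectral index in [-1, 1].

    band_a, band_b: per-pixel reflectances in two bands chosen so the target
    material separates from sand and water. The actual BPDI bands and formula
    are those of Guffogg et al., not the ones assumed here.
    """
    return (band_a - band_b) / (band_a + band_b + 1e-9)  # epsilon avoids 0/0

# Hypothetical 2x2 reflectance patches: top row plastic-covered, bottom row water.
band_a = np.array([[0.31, 0.29], [0.02, 0.03]])
band_b = np.array([[0.12, 0.11], [0.02, 0.03]])

index = normalized_difference_index(band_a, band_b)
plastic_mask = index > 0.4  # threshold would be tuned against field targets
print(index.round(2))   # [[0.44 0.45] [0.   0.  ]]
print(plastic_mask)     # [[ True  True] [False False]]
```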

Thanks to their efforts, scientists now have an effective way to monitor plastic on beaches, which could assist in clean-up operations. As part of the remote sensing team at RMIT, Dr. Guffogg and her colleagues have developed similar tools for monitoring forests and mapping bushfires from space. To validate the BPDI, the team field-tested it by placing 14 plastic targets on a beach in southern Gippsland, about 200 km (125 mi) southeast of Melbourne. Each target was made of a different type of plastic and measured two square meters (21.5 square feet) – smaller than the satellite’s pixel size of about three square meters.

The resulting images were compared to three other indices, two designed for detecting plastics on land and one for detecting plastics in aquatic settings. The BPDI outperformed all three as the others struggled to differentiate between plastics and sand or misclassified shadows and water as plastic. As study author Dr. Mariela Soto-Berelov explained, this makes the BPDI far more useful for environments where water and plastic-contaminated pixels are likely to coexist.  

“This is incredibly exciting, as up to now we have not had a tool for detecting plastics in coastal environments from space. The beauty of satellite imagery is that it can capture large and remote areas at regular intervals. Detection is a key step needed for understanding where plastic debris is accumulating and planning clean-up operations, which aligns with several Sustainable Development Goals, such as Protecting Seas and Oceans.”  

The next step is to test the BPDI tool in real-life scenarios, which will consist of the team partnering with various organizations dedicated to monitoring and addressing the plastic waste problem.

Further Reading: RMIT, Marine Pollution Bulletin


Future Space Telescopes Could be Made From Thin Membranes, Unrolled in Space to Enormous Size

Sun, 11/03/2024 - 12:05pm

Space-based telescopes are remarkable. Their view isn’t obscured by the weather in our atmosphere, and so they can capture incredibly detailed images of the heavens. Unfortunately, they are quite limited in mirror size. As amazing as the James Webb Space Telescope is, its primary mirror is only 6.5 meters in diameter. Even then, the mirror had to have foldable components to fit into the launch rocket. In contrast, the Extremely Large Telescope currently under construction in northern Chile will have a mirror more than 39 meters across. If only we could launch such a large mirror into space! A new study looks at how that might be done.

As the study points out, when it comes to telescope mirrors, all you really need is a reflective surface. It doesn’t need to be coated onto a thick piece of glass, nor does it need a big, rigid support structure. All of that is just there to hold the shape of the mirror against its own weight. As far as starlight is concerned, the shiny surface is all that matters. So why not just use a thin sheet of reflective material? You could just roll it up and put it in your launch vehicle. We could, for example, easily launch a 40-meter roll of aluminum foil into space.

Of course, things aren’t quite that simple. You would still need to unroll your membrane telescope back into its proper shape. You would also need a detector to focus the image upon, and you’d need a way to keep that detector in the correct alignment with the broadsheet mirror. In principle, you could do that with a thin support structure, which wouldn’t add an excessive bulk to your telescope. But even if we assume all of those engineering problems could be solved, you’d still have a problem. Even in the vacuum of space, the shape of such a thin mirror would deform over time. Solving this problem is the main focus of this new paper.

Once launched into space and unfurled, the membrane mirror wouldn’t deform much in absolute terms, but to capture sharp images its shape would have to be held to within roughly the wavelength of visible light. When the Hubble was launched, its mirror shape was off by less than the thickness of a human hair, and it took correcting optics and an entire shuttle mission to fix. Any shift on that scale would render our membrane telescope useless. So the authors look to a well-used trick of astronomers known as adaptive optics.

How radiative adaptive optics might work. Credit: Rabien, et al

Adaptive optics is used on large ground-based telescopes as a way to correct for atmospheric distortion. Actuators behind the mirror distort the mirror’s shape in real time to counteract the twinkles of the atmosphere. Essentially, it makes the shape of the mirror imperfect to account for our imperfect view of the sky. A similar trick could be used for a membrane telescope, but if we had to launch a complex actuator system for the mirror, we might as well go back to launching rigid telescopes. But what if we simply use laser projection instead?

By shining a laser projection onto the mirror, we could alter its shape through radiative recoil. Since the mirror is simply a thin membrane, the recoil would be significant enough to create optical corrections, and the projection could be modified in real time to maintain the mirror’s focus. The authors call this technique radiative adaptive optics, and through a series of lab experiments they have demonstrated that it could work.
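
The physics behind the recoil is simply photon momentum transfer: reflected light of power P pushes on a perfect mirror with a force of about 2P/c. A back-of-the-envelope sketch, where the 1 W beam power is an assumption for illustration rather than a figure from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def recoil_force_newtons(beam_power_w: float, reflectivity: float = 1.0) -> float:
    """Force from photon momentum transfer on a reflecting surface.

    Absorbed light pushes with P/c; a perfect reflector gets twice that.
    """
    return (1.0 + reflectivity) * beam_power_w / C

# A hypothetical 1 W projected beam exerts only nanonewtons, but a
# near-massless membrane needs only gentle, sustained nudges.
print(f"{recoil_force_newtons(1.0):.2e} N")  # ~6.7e-9 N
```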

Doing this in deep space is much more complicated than doing it in the lab, but the work shows the approach is worth exploring. Perhaps in the coming decades we might build an entire array of such telescopes, which would allow us to see details in the distant heavens we can now only imagine.

Reference: Rabien, S., et al. “Membrane space telescope: active surface control with radiative adaptive optics.” Space Telescopes and Instrumentation 2024: Optical, Infrared, and Millimeter Wave. Vol. 13092. SPIE, 2024.

The post Future Space Telescopes Could be Made From Thin Membranes, Unrolled in Space to Enormous Size appeared first on Universe Today.

Categories: Astronomy

Voyager 1 is Forced to Rely on its Low Power Radio

Sat, 11/02/2024 - 7:01pm

Voyager 1 was launched waaaaaay back in 1977. I would have been 4 years old then! It’s an incredible achievement that technology built THAT long ago is still working. Yet here we are in 2024, and Voyager 1 and 2 are getting older. Earlier this week, NASA had to turn off one of the radio transmitters on Voyager 1, forcing communication to rely upon the low-power radio. Alas, technology around 50 years old does sometimes glitch: a command to turn on a heater tripped Voyager 1 into fault protection mode, and the spacecraft switched transmitters! Oops.

Voyager 1 is a NASA space probe launched on September 5, 1977, as part of the Voyager program to study the outer planets and beyond. Initially, Voyager 1’s mission focused on flybys of Jupiter and Saturn, capturing incredible images before traveling outward. In 2012, it became the first human-made object to enter interstellar space, crossing the heliopause, the boundary between the influence of the Sun and interstellar space. It now continues to send data back to Earth from over 22 billion km away, helping scientists learn about the interstellar medium. Carrying the “Golden Record” of sounds and images of life on Earth, Voyager 1 also serves as a time capsule, intended to tell the story of our world to any alien civilization that may encounter it.

The Ringed Planet Saturn

Just a few days ago on 24 October, NASA had to reconnect to Voyager 1 on its outward journey because one of its radio transmitters had been turned off! Alien intervention perhaps! Exciting though that would be, alas not. 

The transmitter seems to have been turned off as a result of one of the spacecraft’s fault protection systems. Any time there is an issue with onboard systems, the computer will flip the spacecraft into protection mode to prevent further damage. If the spacecraft draws too much power, the same system will turn off less critical systems to conserve power. When the fault protection system kicks in, it’s then the job of engineers on the ground to find and fix the fault.

Artist rendition of Voyager 1 entering interstellar space. (Credit: NASA/JPL-Caltech)

There are challenges here though. Due to the immense distance to Voyager 1, now about 24 billion km away, any communication to or from the spacecraft takes almost 23 hours to arrive. A request for data, for example, means a round trip of about 46 hours between sending the command and receiving the reply! Undaunted, the team sent commands to Voyager 1 on 16 October to turn on a heater but, whilst the probe should have had enough power, the command triggered the system to turn off a radio transmitter to conserve power. This was discovered on 18 October when the Deep Space Network was no longer able to detect the usual ping from the spacecraft.
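
Those delays follow directly from the one-way light-travel time, and the calculation is worth sanity-checking. A minimal sketch using the 24 billion km distance quoted above:

```python
C_KM_PER_S = 299_792.458  # speed of light

def one_way_delay_hours(distance_km: float) -> float:
    """Time for a radio signal to cross the given distance."""
    return distance_km / C_KM_PER_S / 3600.0

delay = one_way_delay_hours(24e9)
print(f"One-way: {delay:.1f} h, round trip: {2 * delay:.1f} h")
# One-way: 22.2 h, round trip: 44.5 h -- consistent with the ~23 h
# and ~46 h figures above once the distance is rounded.
```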

The engineers correctly identified the likely cause of the problem and found Voyager pinging away on a different frequency using the alternate radio transmitter, one that hadn’t been used since the early 1980s! With the fault identified, the team did not switch immediately back to the original transmitter in case the fault triggered again. Instead, they are now working to understand the fault before switching back.

Until then, Voyager 1 will continue to communicate with Earth using the lower power transmitter as it continues its exploration out into interstellar space. 

Source : After Pause, NASA’s Voyager 1 Communicating With Mission Team

The post Voyager 1 is Forced to Rely on its Low Power Radio appeared first on Universe Today.

Categories: Astronomy

Webb Confirms a Longstanding Galaxy Model

Sat, 11/02/2024 - 11:05am

Perhaps the greatest tool astronomers have is the ability to look backward in time. Since starlight takes time to reach us, astronomers can observe the history of the cosmos by capturing the light of distant galaxies. This is why observatories such as the James Webb Space Telescope (JWST) are so useful. With it, we can study in detail how galaxies formed and evolved. We are now at the point where our observations allow us to confirm long-standing galactic models, as a recent study shows.

This particular model concerns how galaxies become chemically enriched. In the early universe, there was mostly just hydrogen and helium, so the first stars were massive creatures with no planets. They died quickly and spewed heavier elements, from which more complex stars and planets could form. Each generation adds more elements to the mix. But as a galaxy nurtures a menagerie of stars from blue supergiants to red dwarfs, which stars play the greatest role in chemical enrichment?

One model argues that it is the most massive stars. This makes sense because giant stars explode as supernovae when they die. They toss their enriched outer layers deep into space, allowing the material to mix within great molecular clouds from which new stars can form. But about 20 years ago, another model argued that smaller, more sunlike stars played a greater role.

The Cat’s Eye nebula is a remnant of an AGB star. Credit: ESA, NASA, HEIC and the Hubble Heritage Team, STScI/AURA

Stars like the Sun don’t die in powerful explosions. Billions of years from now, the Sun will swell into a red giant star. In a desperate attempt to keep burning, the core of a sun-like star heats up significantly to fuse helium, and its diffuse outer layers swell. On the Hertzsprung-Russell diagram, they are known as asymptotic giant branch (AGB) stars. While each AGB star might toss less material into interstellar space, they are far more common than giant stars. So, the model argues, AGB stars play a greater role in the enrichment of galaxies.

Both models have their strengths, but demonstrating that the AGB model beats the giant star model is difficult. It’s easy to observe supernovae in galaxies billions of light years away. Not so much with AGB stars. Thanks to the JWST, we can now test the AGB model.

Using JWST, the team looked at the spectra of three young galaxies. Since Webb’s NIRSpec instrument can capture high-resolution infrared spectra, the team could see not just the presence of certain elements but their relative abundance. They found a strong presence of carbon and oxygen bands, which is common for AGB remnants, but also the presence of rarer elements such as vanadium and zirconium. Taken together, this points to a type of AGB star known as thermally pulsing AGBs, or TP-AGBs.

Many red giant stars enter a pulsing phase at the end of their lives. The hot core swells the outer layers, things cool down a bit, gravity compresses the star again, the core reheats, and the whole process starts over. This study indicates that TP-AGBs are particularly efficient at enriching galaxies, thus confirming the 20-year-old model.

Reference: Lu, Shiying, et al. “Strong spectral features from asymptotic giant branch stars in distant quiescent galaxies.” Nature Astronomy (2024): 1-13.

The post Webb Confirms a Longstanding Galaxy Model appeared first on Universe Today.

Categories: Astronomy

The Aftermath of a Neutron Star Collision Resembles the Conditions in the Early Universe

Sat, 11/02/2024 - 11:04am

Neutron stars are extraordinarily dense objects, the densest in the Universe. They pack a lot of matter into a small space, squeezing a couple of solar masses into a radius of roughly 10 km. When two neutron stars collide, they release an enormous amount of energy as a kilonova.
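
A quick estimate shows just how extreme that is. A minimal sketch, assuming the textbook values of two solar masses inside a 10 km radius:

```python
import math

M_SUN = 1.989e30  # solar mass in kg

def mean_density(mass_kg: float, radius_m: float) -> float:
    """Average density of a uniform sphere."""
    return mass_kg / ((4.0 / 3.0) * math.pi * radius_m**3)

rho = mean_density(2.0 * M_SUN, 10_000.0)
print(f"{rho:.1e} kg/m^3")
# ~9.5e17 kg/m^3 -- roughly a billion tonnes per cubic centimeter
```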

That energy tears atoms apart into a plasma of detached electrons and atomic nuclei, reminiscent of the early Universe after the Big Bang.

Even though kilonovae are extraordinarily energetic, they’re difficult to observe and study because they’re transient and fade quickly. The first conclusive kilonova observation was in 2017, and the event is named AT2017gfo. AT stands for Astronomical Transient, followed by the year it was observed, followed by a sequence of letters assigned to uniquely identify the event.
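
The convention is regular enough that it can be unpacked mechanically. A small illustrative sketch (the variable-length letter suffix allows for designations longer than the three-letter “gfo” style):

```python
import re

def parse_transient(designation: str) -> dict:
    """Split an astronomical transient designation like 'AT2017gfo'."""
    match = re.fullmatch(r"AT(\d{4})([a-z]+)", designation)
    if match is None:
        raise ValueError(f"Unrecognised designation: {designation}")
    year, suffix = match.groups()
    return {"type": "Astronomical Transient", "year": int(year), "id": suffix}

print(parse_transient("AT2017gfo"))
# {'type': 'Astronomical Transient', 'year': 2017, 'id': 'gfo'}
```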

New research into AT2017gfo has uncovered more details of this energetic event. The research is “Emergence hour-by-hour of r-process features in the kilonova AT2017gfo.” It’s published in the journal Astronomy and Astrophysics, and the lead author is Albert Sneppen from the Cosmic Dawn Center (DAWN) and the Niels Bohr Institute, both in Copenhagen, Denmark.

A kilonova explosion creates a spherical ball of plasma that expands outward, similar to the conditions shortly after the Big Bang. Plasma is made up of ions and electrons, and the intense heat prevents them from combining into atoms.

However, as the plasma cools, atoms form via nucleosynthesis, and scientists are intensely interested in this process. There are three types of nucleosynthesis: slow neutron capture (s-process), proton process (p-process), and rapid neutron capture (r-process). Kilonovae form atoms through the r-process and are known for forming heavier elements, including gold, platinum, and uranium. Some of the atoms they form are radioactive and begin to decay immediately, and this releases the energy that makes a kilonova so luminous.

This study represents the first time astronomers have watched atoms being created in a kilonova.

“For the first time we see the creation of atoms.”

Rasmus Damgaard, co-author, PhD student at Cosmic DAWN Center

Things happen rapidly in a kilonova, and no single telescope on Earth can watch as it plays out because the Earth’s rotation removes it from view.

“This astrophysical explosion develops dramatically hour by hour, so no single telescope can follow its entire story. The viewing angle of the individual telescopes to the event is blocked by the rotation of the Earth,” explained lead author Sneppen.

This research is based on multiple ground telescopes that each took their turn watching the kilonova as Earth rotated. The Hubble also contributed observations from its perch in low-Earth orbit.

“But by combining the existing measurements from Australia, South Africa and The Hubble Space Telescope, we can follow its development in great detail,” Sneppen said. “We show that the whole shows more than the sum of the individual sets of data.”

As the plasma cools, atoms start to form. This is the same thing that happened in the Universe after the Big Bang. As the Universe expanded and cooled and atoms formed, light was able to travel freely because there were no free electrons to stop it.

The research is based on spectra collected from 0.5 to 9.4 days after the merger. The observations focused on optical and near-infrared (NIR) wavelengths because, in the first few days after the merger, the ejecta is opaque to shorter wavelengths like X-rays and UV. Optical and NIR are like open windows into the ejecta, through which astronomers can observe the rich spectra of newly formed elements, a critical part of kilonovae.

This figure from the research shows how different telescopes contributed to the observations of AT2017gfo. Image Credit: Sneppen et al. 2024.

The P Cygni spectral line is also important in this research. It indicates that a star, or in this case a kilonova, has an expanding shell of gas around it. It combines an emission component with a blueshifted absorption component, giving it powerful diagnostic capabilities. Together, the two components reveal velocity, density, temperature, ionization, and direction of flow.

Strontium features prominently in this research and in kilonovae. It produces strong emission and absorption features at optical/NIR wavelengths, which also reveal the presence of other newly formed elements. These spectral lines do more than reveal the presence of different elements. Along with the P Cygni profile, they’re used to determine the velocity of the ejecta, the velocity structures within it, and its temperature and ionization states.
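
Extracting a velocity from those lines boils down to the Doppler shift of a feature with a known rest wavelength. A simplified, non-relativistic sketch; the wavelengths below are made-up illustrative numbers, not values from the study:

```python
C_KM_S = 299_792.458  # speed of light

def line_of_sight_velocity(rest_nm: float, observed_nm: float) -> float:
    """Non-relativistic Doppler velocity; negative means moving toward us."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm

# Hypothetical: a feature with a 1000 nm rest wavelength observed at 900 nm
v = line_of_sight_velocity(1000.0, 900.0)
print(f"{v:.0f} km/s")  # ~ -30,000 km/s, about 0.1c toward the observer
```

At a tenth of the speed of light the non-relativistic formula is only approximate, but it captures why ejecta velocities can be read almost straight off the spectrum.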

The spectra from AT2017gfo are anything but straightforward. However, in all that light data, the researchers say they’ve identified elements being synthesized, including tellurium, lanthanum, cesium, and yttrium.

“We can now see the moment where atomic nuclei and electrons are uniting in the afterglow. For the first time we see the creation of atoms, we can measure the temperature of the matter and see the micro physics in this remote explosion. It is like admiring the cosmic background radiation surrounding us from all sides, but here, we get to see everything from the outside. We see before, during and after the moment of birth of the atoms,” says Rasmus Damgaard, PhD student at Cosmic DAWN Center and co-author of the study.

“The matter expands so fast and gains in size so rapidly, to the extent where it takes hours for the light to travel across the explosion. This is why, just by observing the remote end of the fireball, we can see further back in the history of the explosion,” said Kasper Heintz, co-author and assistant professor at the Niels Bohr Institute.

The kilonova produced about 16,000 Earth masses of heavy elements, including 10 Earth masses of the elements gold and platinum.

Neutron star mergers can also create black holes, and AT2017gfo may have created the smallest one ever observed, though there’s some doubt. The gravitational wave event GW170817 is associated with the kilonova and was detected by LIGO in August 2017, the first time a GW event was seen in conjunction with its electromagnetic counterpart. Taken together, the GW data and other observations suggest that a black hole was created, but some researchers think a magnetar may be involved instead.

This artist’s illustration shows a neutron star collision that, in addition to the radioactive fire cloud, leaves behind a black hole and jets of fast-moving material from its poles. Illustration: O.S. SALAFIA, G. GHIRLANDA, CXC/NASA, GSFC, B. WILLIAMS ET AL

Kilonovae are complex objects. They’re like mini-laboratories where scientists can study extreme nuclear physics. Kilonovae are important contributors of heavy elements in the Universe, and researchers are keen to model and understand how elements are created in these environments.


The post The Aftermath of a Neutron Star Collision Resembles the Conditions in the Early Universe appeared first on Universe Today.

Categories: Astronomy

New View of Venus Reveals Previously Hidden Impact Craters

Sat, 11/02/2024 - 8:01am

Think of the Moon and most people will imagine a barren world pockmarked with craters. The same is likely true of Mars, albeit more red in colour than grey! The Earth too has had its fair share of impacts, some of them large, but most of the evidence has been eroded away over the eons. Surprisingly perhaps, Venus, the second planet from the Sun, does not have the same weathering processes as Earth, yet it shows signs of impact craters but no large impact basins! A team of astronomers now think they have secured a new view of the hottest planet in the Solar System and revealed the missing impact sites.

Venus is the second planet from the Sun and, whilst it’s often called Earth’s sister planet, the reality is that they differ in many ways. The term comes from similarities in size and composition, yet the conditions on Venus are far more hostile. Surface temperatures far exceed the boiling point of water, the dense atmosphere exerts a pressure on the surface equivalent to being 3,000 feet under water, and there is sulphuric acid rain in the atmosphere! Most definitely not a nice place to head to for your next vacation.

Venus

If you were to stand on the surface of Venus you would see beautifully formed craters. Looking down on the planet from orbit, however, you would see none due to the thick, dense atmosphere. Yet if you could gaze through the obscuring clouds you would see a distinct lack of the larger impact basins of the sort we are familiar with on the Moon. Now, a team of researchers mostly from the Planetary Science Institute believe they have solved the mystery of the missing craters.

The Moon. Credit: NASA

They have mapped a region of Venus known as Haastte-baad Tessera using radar technology, and the results were rather surprising. The region is thought to be one of the oldest surfaces on Venus and is classed as tessera terrain. This type of feature is complex and is characterised by rough, intersecting ridges forming a tile-like pattern, thought to be the result of a thin but strong layer of material forming over a weak layer which can flow and convect energy, just like boiling water. Images from the area in question reveal a set of concentric rings over 1,400 km across at their widest. The team propose that the feature is the result of two back-to-back impact events. “Think of pea soup with a scum forming on top,” said Vicki Hansen, Planetary Science Institute Senior Scientist.

Obviously there is no pea soup on Venus; instead, the thin crust layer formed upon a layer of molten lava. The Venus of today has a thick outer shell, called a lithosphere, about 112 km thick, but when Venus was younger it’s thought this was just 9 km thick! If an impactor struck the hot young Venus then it’s very likely it would have fractured the lithosphere, allowing molten lava to seep through and eventually solidify to create the tesserae we see today.

Confusing things slightly however is that features like this have been seen on top of flat, raised plateaus where the lithosphere is likely much thicker. The researchers have an answer for this though, “When you have vast amounts of partial melt in the mantle that rushes to the surface, what gets left behind is something called residuum. Solid residuum is much stronger than the adjacent mantle, which did not experience partial melting.” said Hansen. “What may be surprising is that the solid residuum is also lower density than all the mantle around it. So, it’s stronger, but it’s also buoyant. You basically have an air mattress sitting in the mantle beneath your lava pond, and it’s just going to rise up and raise that tessera terrain.”

The features found by the team seem to show that two impact events happened one after the other, with the first creating the build-up of lava and the second creating the ring structure seen today.

Source : Impact craters were hiding in plain sight, say researchers with a new view of Venus

The post New View of Venus Reveals Previously Hidden Impact Craters appeared first on Universe Today.

Categories: Astronomy

Multimode Propulsion Could Revolutionize How We Launch Things to Space

Fri, 11/01/2024 - 9:04pm

In a few years, as part of the Artemis Program, NASA will send the “first woman and first person of color” to the lunar surface. This will be the first time astronauts have set foot on the Moon since the Apollo 17 mission in 1972. This will be followed by the creation of permanent infrastructure that will allow for regular missions to the surface (once a year) and a “sustained program of lunar exploration and development.” This will require spacecraft making regular trips between the Earth and Moon to deliver crews, vehicles, and payloads.

In a recent NASA-supported study, a team of researchers at the University of Illinois Urbana-Champaign investigated a new method of sending spacecraft to the Moon. It is known as “multimode propulsion,” a method that integrates a high-thrust chemical mode and a low-thrust electric mode – while using the same propellant. This system has several advantages over other forms of propulsion, not the least of which include being lighter and more cost-effective. With a little luck, NASA could rely on multimode propulsion-equipped spacecraft to achieve many of its Artemis objectives.

The paper describing their investigation, “Indirect optimal control techniques for multimode propulsion mission design,” was recently published in Acta Astronautica. The research was led by Bryan C. Cline, a doctoral student in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign. He was joined by fellow aerospace engineer and PhD Candidate Alex Pascarella, and Robyn M. Woollands and Joshua L. Rovey – an assistant professor and professor with the Grainger College of Engineering (Aerospace Engineering).

Artist’s impression of the ESA LISA Pathfinder mission. Credit: ESA–C.Carreau

To break it down, a multimode thruster relies on a single chemical monopropellant – like hydrazine or Advanced Spacecraft Energetic Non-Toxic (ASCENT) propellant – to power chemical thrusters and an electrospray thruster (aka. colloid thruster). The latter element relies on a process known as electrospray ionization (ESI), where charged liquid droplets are produced and accelerated by a static electric field. Electrospray thrusters were first used in space aboard the ESA’s LISA Pathfinder mission to demonstrate disturbance reduction.

By developing a system that relies on both and can switch as needed, satellites will be able to perform propulsive maneuvers using less propellant (aka minimum-fuel transfers). As Cline said in a Grainger College of Engineering press release:

“Multimode propulsion systems also expand the performance envelope. We describe them as flexible and adaptable. I can choose a high-thrust chemical mode to get someplace fast and a low-thrust electrospray to make smaller maneuvers to stay in the desired orbit. Having multiple modes available has the potential to reduce fuel consumption or reduce time to complete your mission objective.”

The team’s investigation follows a similar study conducted by Cline and researchers from NASA’s Goddard Spaceflight Center and the aerospace advisory company Space Exploration Engineering, LLC. In a separate paper, “Lunar SmallSat Missions with Chemical-Electrospray Multimode Propulsion,” they considered the advantages of multimode propulsion against all-chemical and all-electric approaches for four design reference missions (DRMs) provided by NASA. For this latest investigation, Cline and his colleagues used a standard 12-unit CubeSat to execute these four mission profiles.

Earth–Mars minimum-fuel trajectory when the CubeSat is coasting, as well as in mode 1 (low thrust) and mode 2 (high thrust). Credit: UIUC

“We showed for the first time the feasibility of using multimode propulsion in NASA-relevant lunar missions, particularly with CubeSats,” said Cline. “Other studies used arbitrary problems, which is a great starting point. Ours is the first high-fidelity analysis of multimode mission design for NASA-relevant lunar missions.”

Multimode propulsion is similar in some respects to hybrid propulsion, where two propulsion systems are combined to achieve optimal thrust. A good example of this (though still unrealized) is bimodal nuclear propulsion, where a spacecraft relies on both a nuclear-thermal propulsion (NTP) and a nuclear-electric propulsion (NEP) system. While an NTP system relies on a nuclear reactor to heat hydrogen or deuterium propellant and can deliver a large change in velocity (delta-v) quickly, an NEP system uses the reactor to power an ion engine that offers a consistent level of thrust.

A key advantage multimode propulsion has over a hybrid system is a drastic reduction in the dry mass of the spacecraft. Whereas hybrid propulsion systems require two different propellants (and hence, two separate fuel tanks), multimode propulsion requires only one. This not only saves on the mass and volume of the spacecraft but also makes it cheaper to launch. “I can choose to use high-thrust at any time and low-thrust at any time, and it doesn’t matter what I did in the past,” said Cline. “With a hybrid system, when one tank is empty, I can’t choose that option.”
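
The propellant cost of each mode can be sketched with the Tsiolkovsky rocket equation. A minimal sketch; the specific impulses below are assumed round numbers, loosely representative of monopropellant chemical and electrospray thrusters rather than figures from the paper:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_m_s: float, isp_s: float) -> float:
    """Fraction of initial (wet) mass burned to deliver a given delta-v."""
    return 1.0 - math.exp(-delta_v_m_s / (isp_s * G0))

DELTA_V = 1500.0  # m/s, an assumed mission budget

for mode, isp in [("chemical, high thrust", 230.0), ("electrospray, low thrust", 800.0)]:
    frac = propellant_fraction(DELTA_V, isp)
    print(f"{mode} (Isp {isp:.0f} s): {frac * 100:.1f}% of wet mass")
# chemical: ~48.6% of wet mass; electrospray: ~17.4%
```

Because both modes draw on the same tank, a planner can spend the efficient electrospray delta-v wherever time permits and reserve the expensive chemical burns for maneuvers that genuinely need speed.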

To complete each of the design reference missions for this project, the team made all decisions manually – i.e., when to use high thrust and when to use low thrust. As a result, the trajectories weren’t optimal. This led Cline, after completing the project, to develop an algorithm that automatically selects which mode would lead to an optimal trajectory. It allowed Cline and his team to solve a simple two-dimensional transfer between Earth and Mars and a three-dimensional transfer to geostationary orbit that minimizes fuel consumption. As Cline explained:

“This was an entirely different beast where the focus was on the development of the method, rather than the specific results shown in the paper. We developed the first indirect optimal control technique specifically for multimode mission design. As a result, we can develop transfers that obey the laws of physics while achieving a specific objective such as minimizing fuel consumption or transfer time.”

“We showed the method works on a mission that’s relevant to the scientific community. Now you can use it to solve all kinds of mission design problems. The math is agnostic to the specific mission. And because the method utilizes variational calculus, what we call an indirect optimal control technique, it guarantees that you’ll get at least a locally optimal solution.”

Artist rendering of an Artemis astronaut exploring the Moon’s surface during a future mission. Credit: NASA

The research is part of a project led by Professor Rovey and a multi-institutional team known as the Joint Advanced Propulsion Institute (JANUS). Their work is funded by NASA as part of a new Space Technology Research Institute (STRI) initiative. Rovey is responsible for leading the Diagnostics and Fundamental Studies team, along with Dr. John D. Williams, a Professor of Mechanical Engineering and the Director of the Electric Propulsion & Plasma Engineering Laboratory at Colorado State University (CSU).

As Cline indicated, their work into multimode propulsion could revolutionize how small spacecraft travel between Earth and the Moon, Mars, and other celestial bodies:

“It’s an emerging technology because it’s still being developed on the hardware side. It’s enabling in that we can accomplish all kinds of missions we wouldn’t be able to do without it. And it’s enhancing because if you’ve got a given mission concept, you can do more with multimode propulsion. You’ve got more flexibility. You’ve got more adaptability.

“I think this is an exciting time to work on multimode propulsion, both from a hardware perspective, but also from a mission design perspective. We’re developing tools and techniques to take this technology from something we test in the basement of Talbot Lab and turn it into something that can have a real impact on the space community.”

Further Reading: University of Illinois Urbana-Champaign, Acta Astronautica

The post Multimode Propulsion Could Revolutionize How We Launch Things to Space appeared first on Universe Today.

Categories: Astronomy

China Trains Next Batch of Taikonauts

Fri, 11/01/2024 - 8:31pm

China has a fabulously rich history when it comes to space travel and was among the first to experiment with rocket technology; the invention of the rocket is often attributed to the Sung Dynasty (AD 960–1279). Since then, China has been keen to develop and build its own space industry. The Chinese National Space Administration has already successfully landed probes on the Moon and is now preparing for its first human landings. Chinese astronauts are sometimes known as taikonauts, and CNSA has just confirmed that its fourth batch of taikonauts is set to train for lunar landings.

The Chinese National Space Administration (CNSA) is China’s equivalent to NASA. It was founded in 1993 to oversee the country’s space aspirations. Amazing results have been achieved over the last twenty years, including the landmark Chang’e lunar missions. In 2019 Chang’e-4 landed on the far side of the Moon, the first lunar lander to do so, and in 2021 China became the third country to land a rover on Mars. In 2021 the first modules of CNSA’s Tiangong space station were launched; it is now operational and, working with other space agencies, is conducting a number of scientific research projects.

China has announced that it successfully completed its latest selection process in May. The CNSA is striving to expand its team of taikonauts. Ten were chosen from all the applicants, including eight experienced space pilots and two payload specialists. The team will begin their program of training in August, covering over 200 subject areas designed to prepare them for future missions to the Moon and other Chinese space initiatives.

The training covers an extensive range of skills. It will include training for living and working in microgravity, lessons on physical and mental health in space, and specialist training in extravehicular activities. They will also learn maintenance techniques for advanced spacecraft systems and receive hands-on training in conducting experiments in microgravity.

On her 2007 mission aboard the International Space Station, NASA astronaut Peggy Whitson, Expedition 16 commander, worked on the Capillary Flow Experiment (CFE), which observes the flow of fluid, in particular capillary phenomena, in microgravity. Credits: NASA

The program is designed to expand and fine-tune the skills of the taikonauts in preparation for future crewed lunar missions. Specialist training for lunar landings includes piloting spacecraft under different gravitational conditions, manoeuvring lunar rovers, celestial navigation, and stellar identification.

Not only will they learn about space operations, but they will also have to learn skills to support scientific objectives. This will include how to conduct geological surveys and how to operate tools and manoeuvre in microgravity environments.

Source : China’s fourth batch of taikonauts set for lunar landings

The post China Trains Next Batch of Taikonauts appeared first on Universe Today.

Categories: Astronomy

NASA Focusses in on Artemis III Landing Sites.

Fri, 11/01/2024 - 7:14pm

It was in 1969 that humans first set foot on the Moon. Back then, the Apollo program was the focus of lunar exploration, but now, over 50 years on, it looks like we are set to head back. The Artemis program hopes to take us back to the Moon, and it’s going from strength to strength. The plan is to get humans back on the lunar surface by 2025 as part of Artemis III. As a prelude, NASA is now turning its attention to the possible landing sites.

The Artemis Project is NASA’s program aimed at returning humans to the Moon and establishing a permanent base there, ultimately with a view to paving the way for missions to Mars. Established in 2017, Artemis intends to land “the first woman and the next man” on the lunar surface by 2025. The program began with Artemis I, an uncrewed mission that orbited the Moon. Artemis II will take astronauts on an orbit of the Moon, and finally Artemis III will land humans back on the Moon by 2025. At the heart of the program are the giant Space Launch System (SLS) rocket and the Orion spacecraft.

NASA’s Space Launch System rocket carrying the Orion spacecraft launches on the Artemis I flight test, Wednesday, Nov. 16, 2022, from Launch Complex 39B at NASA’s Kennedy Space Center in Florida. Credit: NASA/Joel Kowsky.

As the plans ramp up for the first crewed landing, NASA is now analysing possible landing sites and has identified nine potential spots. They are all near the South Pole of the Moon and would provide Artemis III with landing sites close to potentially useful resources. Further investigation will be required to assess their suitability.

The team working on the analysis is the Cross Agency Site Selection Analysis team, and they will work with other science and industry partners. The teams will explore each possible site for science value and mission suitability, including the availability of water ice. The final list so far, in no particular order, is:

  • Peak near Cabeus B
  • Haworth
  • Malapert Massif
  • Mons Mouton Plateau
  • Mons Mouton
  • Nobile Rim 1
  • Nobile Rim 2
  • de Gerlache Rim 2
  • Slater Plain

The south polar region was chosen chiefly because it has water ice locked up in its permanently shadowed craters. The Apollo missions never visited that region of the Moon either, so it is a great opportunity for humans to explore this ancient part of the lunar surface. To settle on these nine areas, the team assessed various regions of the lunar south pole for launch window suitability, terrain suitability, communications capability, and even lighting levels. The geology team also examined the landing sites to assess their scientific value.

Apollo 17 astronaut Harrison Schmitt collecting a soil sample, his spacesuit coated with dust. Credit: NASA

NASA will finally settle on the appropriate landing site based upon the chosen launch date, since that determines the transfer trajectories to the Moon, the orbital paths, and the surface environment at each site.

Source : NASA Provides Update on Artemis III Moon Landing Regions

The post NASA Focusses in on Artemis III Landing Sites. appeared first on Universe Today.

Categories: Astronomy

The Connection Between Black Holes and Dark Energy is Getting Stronger

Fri, 11/01/2024 - 5:50pm

The discovery of the accelerated expansion of the Universe has often been attributed to the force known as dark energy. An intriguing new theory was put forward last year to explain this mysterious force: black holes could be the cause of dark energy! The theory goes on to suggest that as more black holes form in the Universe, the stronger the pressure from dark energy becomes. A survey from the Dark Energy Spectroscopic Instrument (DESI) seems to support the theory. Data from the first year of operation shows the density of dark energy increasing over time, seemingly in correlation with the number and mass of black holes!

Cast your mind back about 13.8 billion years to the beginning of the Universe. Just after the Big Bang, the moment when the Universe popped into existence, there was a brief period when the Universe expanded faster than the speed of light. Before you argue that nothing can travel faster than the speed of light, we are talking of the very fabric of space and time expanding faster than light. The speed-of-light limit relates to travel through the fabric of space, not to the fabric of space itself! This was the inflationary period.

This illustration shows the “arrow of time” from the Big Bang to the present cosmological epoch. Credit: NASA

The energy that drove the expansion in the early Universe shared similarities with dark energy, the repulsive force that seems to permeate the Universe and is driving the current day accelerated expansion of the Universe.

What is dark energy though? It is thought to make up around 68% of the Universe and, unlike normal matter and energy, seems to exert a repulsive force rather than an attractive one. The repulsive nature was first inferred from observations in the late 1990s, when astronomers deduced the rate of acceleration by observing distant supernovae. As to the nature of dark energy, no one really knows what it is or where it comes from. That is, perhaps, until now.

Artist’s illustration of a bright and powerful supernova explosion. (Credit: NASA/CXC/M.Weiss)

A team of researchers from the University of Michigan and other institutions have published a paper in the Journal of Cosmology and Astroparticle Physics in which they propose that black holes are the source of dark energy. Professor Gregory Tarle asked, ‘Where in the later Universe do we see gravity as strong as it was at the beginning of the Universe?’ The answer, Tarle goes on to describe, is the centre of black holes. Tarle and his team propose that what happened during the inflationary period runs in reverse during the collapse of a massive star. When this happens, the matter could conceivably become dark energy.

The team used data from the Dark Energy Spectroscopic Instrument (DESI), which is mounted upon the 4m Mayall telescope at Kitt Peak National Observatory. The instrument is essentially 5,000 computer-controlled optical fibres covering an area of the sky equal to about 8 square degrees. The evidence for dark energy comes from studying tens of millions of galaxies. The galaxies are so far away that their light takes billions of years to reach us, and this information lets us determine how fast the Universe is expanding with unprecedented precision.
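
At its heart the measurement is conceptually simple: compare the rest wavelength of a known spectral feature with the wavelength DESI records. A minimal sketch with illustrative numbers (the observed wavelength below is invented for the example):

```python
def redshift(rest_nm: float, observed_nm: float) -> float:
    """Cosmological redshift z from a shifted spectral feature."""
    return (observed_nm - rest_nm) / rest_nm

# Hypothetical: H-alpha, rest wavelength 656.3 nm, observed at 984.5 nm
z = redshift(656.3, 984.5)
print(f"z = {z:.2f}")
# z = 0.50 -- that light was emitted when the Universe was about
# two-thirds its present size (scale factor 1 / (1 + z))
```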

Stu Harris works on assembling the focal plane for the Dark Energy Spectroscopic Instrument (DESI), which involves hundreds of thousands of parts, at Lawrence Berkeley National Laboratory on Wednesday, 6 December, 2017 in Berkeley, Calif.

The data shows evidence that dark energy has increased with time. That is perhaps not surprising in itself, but it seems to accurately mirror the increase in black holes over time too. Now that DESI is operational, more observations are required to hunt down black holes and quantify their growth over time, to see if there really is merit in this exciting new hypothesis.

Source : Evidence mounts for dark energy from black holes

The post The Connection Between Black Holes and Dark Energy is Getting Stronger appeared first on Universe Today.

Categories: Astronomy

Will Advanced Civilizations Build Habitable Planets or Dyson Spheres

Fri, 11/01/2024 - 2:46pm

If there are alien civilizations in the Universe, some of them could be super advanced. So advanced that they can rip apart planets and create vast shells surrounding a star to capture all its energy. These Dyson spheres should be detectable by modern telescopes. Occasionally astronomers find an object that resembles such an alien megastructure, but so far, they’ve all turned out to be natural objects. As best we can tell, there are no Dyson spheres out there.

And when you think about it, building a Dyson sphere is the cosmic endgame of a capitalist dystopia. In the never-ending quest to capture and consume every last bit of energy, your civilization rips worlds asunder, moving heaven and earth to create an orbitally unstable, unlivable engine. If you can traverse light-years and transform planets, why not just move Earth-like planets and moons into a star’s habitable zone and have a nice cluster of comfy planets to live on? If this kind of stellar-punk civilization is out there, could astronomers detect it? This is the question behind a study on the arXiv.

The authors begin by noting that when Freeman Dyson proposed the idea in 1960, our solar system was the only known planetary system. Star systems were thought to be rare at the time, but now we know better. Most stars have planets, and even our solar system has a dozen water-rich moons that could be made habitable with a shift of their orbits and a bit of terraforming. Since this would be much easier than building a Dyson sphere, the authors argue that modified systems should be much more common. The only question is how to detect them.

One way would be to look for planetary systems that don’t seem to have formed naturally. For example, if you find a system with a dozen worlds in a star’s habitable zone and few other planets, that isn’t likely to have happened by chance. Less obvious would be to look for systems that are orbitally unusual. Perhaps the planets have orbital resonances that aren’t stable in the long term, or orbits that are unusually perfect. Maybe the chemical composition of some worlds doesn’t match that of the system as a whole. Anything that stands out might be worth a closer look.

Using lasers to change a planet’s orbit. Credit: Narasimha, et al

Another way would be to look for signs of systems under construction. The authors note that planets could be moved or captured slowly over time using high-power directional lasers to accelerate them. Stray light from those lasers would be visible across light years. If we detect monochromatic laser light coming from a potentially habitable star, it could be aliens building a better home.

It’s not likely that we’ll find this kind of evidence, but the idea is no stranger than that of giant alien megastructures. Besides, it’s fun to think about just how many habitable planets you could pack into a single star system. It turns out to be quite a lot!

Reference: Narasimha, Raghav, Margarita Safonova, and C. Sivaram. “Making Habitable Worlds: Planets Versus Megastructures.” arXiv preprint arXiv:2309.06562 (2023).

The post Will Advanced Civilizations Build Habitable Planets or Dyson Spheres appeared first on Universe Today.

Categories: Astronomy