I can calculate the motions of the heavenly bodies, but not the madness of people

— Sir Isaac Newton

Universe Today

Space and astronomy news

Good News, the Ozone Layer Hole is Continuing to Shrink

Mon, 11/11/2024 - 8:59am

Climate change is a huge topic, hotly debated across the world. We continue to burn fossil fuels and charge toward human-driven climate change, but while that behaviour never seems to improve, something else does! For the last few decades we have been pumping chlorofluorocarbons into the atmosphere, causing a hole to form in the ozone layer. Thanks largely to worldwide regulation changes and a reduction in the use of these chemicals, the hole, it seems, is finally starting to shrink.

The ozone layer is the protective shield in Earth’s stratosphere. It sits roughly 15 to 35 kilometres above the Earth and protects us by absorbing harmful ultraviolet radiation. The region is rich in ozone, a molecule composed of three oxygen atoms, and it filters out the UV-B and UV-C radiation that can lead to skin cancer and cataracts and can even damage plants and crops. The rest of the atmosphere is composed mostly of nitrogen (78%) and oxygen (21%), with a few other gases making up the remaining 1%.

A view of Earth’s atmosphere from space. Credit: NASA

In the late 20th century, scientists found that certain chemicals, most famously chlorofluorocarbons (CFCs), were slowly destroying the layer. This resulted in seasonal holes appearing in the ozone layer, especially over Antarctica. In 1987, the Montreal Protocol, an international treaty, was signed to curb the global release of CFCs and other ozone-harming gases.

Just recently, a team of scientists from NASA and the National Oceanic and Atmospheric Administration (NOAA) confirmed that the hole in the ozone layer over the south pole was relatively small compared to previous years. From September to October, when the ozone depletion process is at its peak, it was the 7th smallest hole since 1992. An average season sees an incredible 20 million square kilometres of ozone depletion. The team’s data even suggests the layer could fully recover by 2066.

To collect the data, the team uses several systems. Satellites (Aura, NOAA-20, NOAA-21 and Suomi NPP) gather measurements from orbit, while weather balloons launched from the South Pole Baseline Atmospheric Observatory directly measure ozone concentrations.

Geostationary orbits are where telecommunication satellites and other monitoring satellites operate. This image shows one of NOAA’s Geostationary Operational Environmental Satellites. Image Credit: NOAA.

The measurements are captured in Dobson Units. One Dobson Unit is equivalent to the number of ozone molecules that would be needed to create a layer of pure ozone 0.01 millimetres thick. Since temperature and pressure affect this, the measurement is based on a layer at 0 degrees Celsius and 1 atmosphere (the average pressure of the atmosphere at Earth’s surface). The measurement in October 2024 was 109 Dobson Units, compared to the lowest ever value of 92 Dobson Units in 2006.
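
To make the unit concrete, here is a minimal Python sketch (not from the report) that converts a Dobson Unit reading into the thickness of the equivalent pure-ozone layer and a column density. The conversion factor of roughly 2.687 × 10^20 molecules per square metre per Dobson Unit follows from the standard-condition definition above.

    # Convert a Dobson Unit (DU) reading into physical quantities.
    # 1 DU = a 0.01 mm layer of pure ozone at 0 deg C and 1 atm,
    # equivalent to roughly 2.687e20 ozone molecules per square metre.
    MOLECULES_PER_M2_PER_DU = 2.687e20

    def ozone_column(dobson_units: float):
        thickness_mm = dobson_units * 0.01               # pure-ozone layer, mm
        molecules = dobson_units * MOLECULES_PER_M2_PER_DU
        return thickness_mm, molecules

    # October 2024 reading vs. the 2006 record low
    for du in (109, 92):
        mm, n = ozone_column(du)
        print(f"{du} DU -> {mm:.2f} mm of pure ozone, {n:.2e} molecules/m^2")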

The Montreal Protocol certainly seems to be making a difference, with a significant and continuous decline in CFCs. This, along with an infusion of ozone from north of Antarctica, has helped reverse the depletion.

Source: Ozone Hole Continues Healing in 2024


How Webb Stays in Focus

Sun, 11/10/2024 - 11:32am

One of the most difficult challenges when assembling a telescope is aligning it to optical precision. If you don’t do it correctly, all your images will be fuzzy. This is particularly challenging when you assemble your telescope in space, as the James Webb Space Telescope (JWST) demonstrates.

Unlike the Hubble Space Telescope, the JWST doesn’t have a single primary mirror. To fit inside its launch rocket, the mirror had to be folded, then deployed after launch. For this reason and others, JWST’s primary reflector is a set of 18 hexagonal mirror segments. Each segment is only 1.3 meters wide, but when aligned properly, they effectively act as a single 6.5-meter mirror. It’s an effective way to build a larger space telescope, but it means the mirror assembly has to be focused in space.

To achieve this, each mirror segment has a set of actuators that can move the segment in six degrees of freedom. The segments are focused using a wavefront phasing technique. Since light behaves as a wave, when two beams of light overlap, the waves create an interference pattern. When the mirrors are aligned properly, the waves of light from each mirror segment also align, creating a sharp focus.
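
As a rough illustration of why phasing matters, the toy Python calculation below combines two equal beams with a relative wavefront error and prints the resulting intensity. The 2-micron wavelength is an assumed, representative infrared value, and this is purely pedagogical, not the JWST team’s actual algorithm.

    # Toy model: two mirror segments treated as two equal light beams.
    # In-phase beams interfere constructively (4x the intensity of one beam);
    # a wavefront error between them pushes light out of the sharp focus.
    import math

    WAVELENGTH_NM = 2000  # assumed representative infrared wavelength (2 microns)

    def combined_intensity(wavefront_error_nm: float) -> float:
        """Intensity of two equal-amplitude beams with a relative phase offset."""
        phase = 2 * math.pi * wavefront_error_nm / WAVELENGTH_NM
        return 2 + 2 * math.cos(phase)

    for err in (0, 65, 150):  # perfect, JWST's achieved error, JWST's design spec
        print(f"{err:>3} nm error -> relative intensity {combined_intensity(err):.3f}")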

The primary mirrors of Hubble and JWST compared. Credit: Wikipedia user Bobarino

For JWST, its Near Infrared Camera (NIRCam) is equipped with a wavefront camera. To align the mirrors, the JWST team points NIRCam at a star, then intentionally moves the mirrors out of alignment. This gives the star a blurred, diffracted look. The team then repositions the mirrors until the star comes into sharp focus, which brings the segments into alignment.

This was done to align the mirrors soon after JWST was launched. But due to vibrations and shifts in temperature, the mirror segments slowly drift out of alignment. Not by much, but enough that they need to be realigned occasionally. To keep things sharp, the team typically does a wavefront error check every other day. There is also a small camera aimed at the mirror assembly, so the team can take a “selfie” to monitor the condition of the mirrors.

The JWST was designed to maintain a wavefront error of 150 nanometers, but the team has been able to maintain a 65 nanometer error. It’s an astonishingly tight alignment for a space telescope, which allows JWST to capture astounding images of the most distant galaxies in the observable universe.

You can learn more about this technique on the NASA Blog.


A Trash Compactor is Going to the Space Station

Sat, 11/09/2024 - 1:44pm

Astronauts on the International Space Station generate their share of garbage, filling up cargo ships that then deorbit and burn up in the atmosphere. Now Sierra Space has won a contract to build a trash compactor for the space station. The device will reduce the volume of space trash by 75% and allow water and gases to be extracted for reclamation. The resulting garbage blocks are easily stored and could even be used as radiation shielding on long missions.

Called the Trash Compaction and Processing System (TCPS), the device is slated for testing aboard the International Space Station in late 2026.

Sierra Space said this technology could be critical for the success of future space exploration — such as long-duration crewed missions to the Moon and Mars — to handle waste management, stowage, and water reclamation.

“Long-term space travel requires the efficient use of every ounce of material and every piece of equipment. Every decision made on a spacecraft can have far-reaching consequences, and waste management becomes a matter of survival and mission integrity in the vacuum of space,” said Sierra Space CEO, Tom Vice, in a press release. “We’re addressing this challenge through technological innovation and commitment to sustainability in every facet of space operations. Efficient, sustainable, and innovative waste disposal is essential for the success of crewed space exploration.”

A sample trash tile, compressed to less than one-eighth of the original trash volume, was produced by the Heat Melt Compactor. Credit: NASA.

NASA said that currently aboard the International Space Station (ISS), common trash such as food packaging, clothing, and wipes is separated into wet and dry trash bags; these bags are stored temporarily before being packed into a spent resupply vehicle, such as the Russian Progress ship or Northrop Grumman’s Cygnus vehicle. When full, these ships undock and burn up during atmospheric re-entry, taking all the trash with them.

However, for missions farther out into space, trash will have to be managed and disposed of by other methods, such as jettisoning it into space – which doesn’t sound like a very eco-friendly idea. Additionally, wet trash contains components that may not be storable for long periods between jettisoning events without endangering the crew.

Plus, there’s currently no way for any water to be reclaimed from the “wet” waste. The TCPS should be able to recover nearly all the water from the trash for future use.

TCPS is a stand-alone system and only requires access to power, data, and air-cooling interfaces. It is being designed to be simple to use.

Sierra Space said the device includes an innovative Catalytic Oxidizer (CatOx) “that processes volatile organic compounds (VOCs) and other gaseous byproducts to maintain a safe and sterile environment in space habitats.” Heat and pressure compact astronaut trash into solid square tiles that compress to less than one-eighth of the original trash volume. The tiles are easy to store, safe to handle, and have the added – and potentially very important – benefit of providing additional radiation protection.

Sierra Space was originally awarded a contract in 2023, and in January 2024 the company completed the initial design and review phase, which was presented to NASA. Sierra Space is now finalizing the fabrication, integration, and checkout of the TCPS Ground Unit, which will be used for ground testing in ongoing system evaluations. Based on the success of the design, Sierra Space has been awarded a new contract to build a Flight Unit that will be launched and tested in orbit aboard the space station.

NASA said that once tested on the ISS, the TCPS can be used for exploration missions wherever common spacecraft trash is generated and needs to be managed.


Using Light Echoes to Find Black Holes

Sat, 11/09/2024 - 12:39pm

The most amazing thing about light is that it takes time to travel through space. Because of that one simple fact, when we look up at the Universe we see not a snapshot but a history. The photons we capture with our telescopes tell us about their journey. This is particularly true when gravity comes into play, since gravity bends and distorts the path of light. In a recent study, a team shows us how we might use this fact to better study black holes.

Near a black hole, our intuition about the behavior of light breaks down. For example, if we imagine a flash of light in empty space, we understand that the light from that flash expands outward in all directions, like the ripples on a pond. If we observe that flash from far away, we know the light has traveled in a straight line to reach us. This is not true near a black hole.

The gravity of a black hole is so intense that light never travels in a straight line. If there is a flash near a black hole, some of the light will travel directly to us, but some of the light will travel away from us, only to be gravitationally swept around the backside of the black hole to head in our direction. Some light will make a full loop around the black hole before reaching us. Or two loops, or three. With each path, the light travels a different distance to reach us, and therefore reaches us at a different time. Rather than observing a single flash, we would see echoes of the flash for each journey.

In principle, since each echo is from a different path, the timing of these echoes would allow us to map the region around a black hole more clearly. The echoes would tell us not just the black hole’s mass and rotation; they would also allow us to test the limits of general relativity. The only problem is that with current observations, the echoes wash together in the data. We can’t distinguish different echoes.

This is where this new study comes in. The team proposes observing a black hole with two telescopes, one on Earth and one in space. Each telescope would have a slightly different view of the black hole. Through very long baseline interferometry, the two sets of data could be correlated to distinguish the echoes. In their work, the team ran tens of thousands of simulations of light echoes from a supermassive black hole similar to the one in the M87 galaxy. They demonstrated that interferometry could be used to find correlated light echoes.
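
For a sense of scale, here is a back-of-the-envelope Python estimate of the spacing between successive echoes, approximated as the light-travel time around the photon sphere of a non-rotating black hole. The 6.5-billion-solar-mass figure is an assumed, M87*-like value, and relativistic time-delay corrections are ignored.

    # Rough echo spacing: the time for light to make one loop around the
    # photon sphere (r = 3GM/c^2) of a non-rotating black hole.
    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg
    M = 6.5e9 * M_SUN  # assumed M87*-like black hole mass

    r_photon = 3 * G * M / C**2         # photon sphere radius, m
    delay = 2 * math.pi * r_photon / C  # one full loop, seconds

    print(f"Photon sphere radius: {r_photon / 1.496e11:.0f} AU")
    print(f"Approximate echo spacing: {delay / 86400:.1f} days")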

It would be a challenge to build such an interferometer, but it would be well within our engineering capabilities. Perhaps in the future, we will be able to observe echoes of light to explore black holes and some of the deepest mysteries of gravity.

Reference: Wong, George N., et al. “Measuring Black Hole Light Echoes with Very Long Baseline Interferometry.” The Astrophysical Journal Letters 975.2 (2024): L40.


Launching Mass From the Moon Helped by Lunar Gravity Anomalies

Sat, 11/09/2024 - 11:49am

Placing a mass driver on the Moon has long been a dream of space exploration enthusiasts. It would open up so many possibilities for the exploration of our solar system and for actually living in space. Gerard O’Neill, in his work on the gigantic cylinders that now bear his name, mentioned using a lunar mass driver as the source of the material to build them. So far, we have yet to see such an engineering wonder in the real world, but as more research is done on the topic, more feasible paths seem to be opening up toward its potential implementation.

One recent contribution to that effort is a study by Pekka Janhunen of the Finnish Meteorological Institute and Aurora Propulsion Technologies, a maker of space-based propulsion systems. He details how quirks of lunar gravity could let a mass driver send passive payloads to lunar orbit, where they can then be picked up by active, high-efficiency systems and sent elsewhere in the solar system for processing.

Anomalies in the Moon’s gravitational field have been known for some time. Typically, mission planners view them as a nuisance to be avoided, as they can cause satellite orbits to degrade more quickly than simple models predict. However, according to Dr. Janhunen, they could be a help rather than a hindrance.

Mass drivers have been popular in science fiction for some time.
Credit – Isaac Arthur YouTube Channel

Typical models of lunar mass drivers focus on either active or passive payloads sent into lunar orbit. Active payloads require an onboard propulsion system to get them where they are going. These payloads therefore need more sophisticated technology and some form of propellant, which diminishes the payload mass available for use elsewhere in the solar system.

On the other hand, passive payloads typically end up in one of two scenarios. Either they make one lunar orbit in about a day and then deorbit back to the lunar surface, or they end up in a highly randomized orbit and essentially become lunar space junk. Neither outcome would be sustainable for moving significant mass off the lunar surface.

Dr. Janhunen may have found a solution, though. He studied the known lunar gravitational anomalies mapped by GRAIL, the mission that charted the Moon’s gravity in great detail, and found several places on the lunar surface where a mass driver could potentially launch a passive payload into an orbit lasting up to nine days. These places are along the sides of mountains, and three of them are on the side of the Moon facing Earth. Importantly, all of them owe these long-lived orbits to their local gravitational quirks.

The Artemis missions might be our best chance in the coming decades to build a mass driver on the Moon – Fraser discusses their details here.

More time in orbit would mean more time for an active tug to grab hold of the passive lunar payload and take it to a processing station, such as a space station at the Earth–Moon L5 Lagrange point. This active tug could be reusable, have a highly efficient electrical propulsion system developed and built on Earth, and only need to be launched once.

All the system would require is a mass driver that could accelerate a payload to lunar orbital velocity, about 1.7 km/s. That is well within our capability with existing technologies, but it would demand a massive engineering effort far beyond anything we have built in space so far. However, every study that shows an increased benefit or lowered cost to exploiting the resources of our nearest neighbor to expand our reach into the solar system takes us one step closer to making that a reality.
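
That 1.7 km/s figure follows directly from the Moon’s mass and radius. Here is a quick Python sanity check using the standard circular-orbit formula v = sqrt(GM/r), evaluated at the lunar surface:

    # Circular orbital velocity just above the lunar surface: v = sqrt(GM/r).
    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_MOON = 7.342e22  # lunar mass, kg
    R_MOON = 1.7374e6  # mean lunar radius, m

    v = math.sqrt(G * M_MOON / R_MOON)
    print(f"Low lunar orbital velocity: {v / 1000:.2f} km/s")  # ~1.68 km/s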

Learn More:
P Janhunen – Launching mass from the Moon helped by lunar gravity anomalies
UT – Moonbase by 2022 For $10 Billion, Says NASA
UT – NASA Wants to Move Heavy Cargo on the Moon
NSS – L5 News: Mass Driver Update

Lead Image:
DALL-E illustration of a lunar electromagnetic launcher


A Star Disappeared in Andromeda, Replaced by a Black Hole

Fri, 11/08/2024 - 4:39pm

Massive stars – those at least about eight times the mass of the Sun – explode as supernovae at the end of their lives. The explosions, which leave behind a black hole or a neutron star, are so energetic they can outshine their host galaxies for months. However, astronomers appear to have spotted a massive star that skipped the explosion and turned directly into a black hole.

Stars are balancing acts between the outward force of fusion and the inward force of their own gravity. When a massive star enters its last evolutionary stages, it begins to run out of hydrogen, and its fusion weakens. The outward force from its fusion can no longer counteract the star’s powerful gravity, and the star collapses in on itself. The result is a supernova explosion, a calamitous event that destroys the star and leaves behind a black hole or a neutron star.

However, it appears that sometimes these stars fail to explode as supernovae and instead turn directly into black holes.

New research shows how one massive, hydrogen-depleted supergiant star in the Andromeda galaxy (M31) failed to detonate as a supernova. The research is “The disappearance of a massive star marking the birth of a black hole in M31.” The lead author is Kishalay De, a postdoctoral scholar at the Kavli Institute for Astrophysics and Space Research at MIT.

These types of supernovae are called core-collapse supernovae, also known as Type II. They’re relatively rare, with one occurring about every one hundred years in the Milky Way. Scientists are interested in supernovae because they are responsible for creating many of the heavy elements, and their shock waves can trigger star formation. They also create cosmic rays that can reach Earth.

This new research shows that we may not understand supernovae as well as we thought.

Artist’s impression of a Type II supernova explosion. These supernovae explode when a massive star nears the end of its life and leaves behind either a black hole or a neutron star. But sometimes, the supernova fails to explode and collapses directly into a black hole. Image Credit: ESO

The star in question is named M31-2014-DS1. Astronomers noticed it brightening in mid-infrared (MIR) in 2014. For one thousand days, its luminosity was constant. Then, for another thousand days between 2016 and 2019, it faded dramatically. It’s a variable star, but that can’t explain these fluctuations. In 2023, it was undetected in deep optical and near-IR (NIR) imaging observations.

The researchers say that the star was born with an initial mass of about 20 solar masses and reached its terminal nuclear-burning phase with about 6.7 solar masses. Their observations suggest that the star is surrounded by a recently ejected dust shell, as would accompany a supernova explosion, yet there’s no evidence of an optical outburst.

“The dramatic and sustained fading of M31-2014-DS1 is exceptional in the landscape of variability in massive, evolved stars,” the authors write. “The sudden decline of luminosity in M31-2014-DS1 points to the cessation of nuclear burning together with a subsequent shock that fails to overcome the infalling material.” A supernova explosion is so powerful that it completely overcomes infalling material.

“Lacking any evidence for a luminous outburst at such proximity, the observations of M31-2014-DS1 bespeak signatures of a ‘failed’ SN that leads to the collapse of the stellar core,” the authors explain.

What could make a star fail to explode as a supernova, even if it’s the right mass to explode?

Supernovae are complex events. The density inside a collapsing core is so extreme that electrons are forced to combine with protons, creating both neutrons and neutrinos. This process is called neutronization, and it creates a powerful burst of neutrinos that carries about 10% of the star’s rest mass energy. The outburst is called a neutrino shock.

Neutrinos get their name from the fact that they’re electrically neutral and seldom interact with regular matter. Every second, about 400 billion neutrinos from our Sun pass right through every person on Earth. But in a dense stellar core, the neutrino density is so extreme that some of them deposit their energy into the surrounding stellar material. This heats the material, which generates a shock wave.

The neutrino shock always stalls, but sometimes it revives. When it revives, it drives an explosion and expels the outer layer of the supernova. If it’s not revived, the shock wave fails, and the star collapses and forms a black hole.

This image illustrates how the neutrino shock wave can stall, leading to a black hole without a supernova explosion. A shows the initial shock wave with cyan lines representing neutrinos being emitted and the red circle representing the shock wave propagating outward. B shows the neutrino shock stalling, with white arrows representing infalling matter. The outer layers fall inward, and the neutrino heating isn’t powerful enough to revive the shock. C shows the failed shock dissipating as a dotted red line and the stronger white arrows represent the collapse accelerating. The outer layers are falling in rapidly, and the core is becoming more compact. D shows the black hole forming, with the blue circle representing the event horizon and the remaining material forming an accretion disk. (Credit: Original illustration created for this article.)

In M31-2014-DS1, the neutrino shock was not revived. The researchers were able to constrain the amount of material ejected by the star, and it was far below what a supernova would eject. “These constraints imply that the majority of stellar material (≳5 solar masses) collapsed into the core, exceeding the maximum mass of a neutron star (NS) and forming a BH,” they conclude. About 98% of the star’s mass collapsed and created a black hole of about 6.5 solar masses.

M31-2014-DS1 isn’t the only failed supernova, or candidate failed supernova, that astronomers have found. They’re difficult to spot because they’re characterized by what doesn’t happen rather than what does. A supernova is hard to miss because it’s so bright and appears in the sky suddenly. Ancient astronomers recorded several of them.

In 2009, astronomers discovered the only other confirmed failed supernova: N6946-BH1, a red supergiant of about 25 solar masses in NGC 6946, the “Fireworks Galaxy.” Its luminosity increased to a million solar luminosities, but by 2015 it had disappeared in optical light, leaving only a faint infrared glow.

A survey with the Large Binocular Telescope monitored 27 nearby galaxies, looking for disappearing massive stars. The results suggest that between 20% and 30% of massive stars can end their lives as failed supernovae. However, M31-2014-DS1 and N6946-BH1 are the only confirmed observations.


eROSITA All-Sky Survey Takes the Local Hot Bubble’s Temperature

Fri, 11/08/2024 - 3:37pm

About half a century ago, astronomers theorized that the Solar System is situated in a low-density environment of hot gas. This hot gas emits soft X-rays and displaces the dust in the local interstellar medium (ISM), creating what is known as the Local Hot Bubble (LHB). The theory arose to explain the ubiquitous soft X-ray background (below 0.2 keV) and the lack of dust in our cosmic neighborhood. It has faced some challenges over the years, including the discovery that solar wind ions interacting with neutral atoms in the heliosphere produce similar soft X-ray emissions.

Thanks to new research by an international team of scientists led by the Max Planck Institute for Extraterrestrial Physics (MPE), we now have a 3D model of the hot gas in the Solar System’s neighborhood. Using data obtained by the eROSITA All-Sky Survey (eRASS1), they detected large-scale temperature differences in the LHB, indicating that the LHB must exist and that both it and solar wind interactions contribute to the soft X-ray background. They also revealed an interstellar tunnel that could link the LHB to a larger “superbubble.”

The research was led by Michael C. H. Yeung, a PhD student at the MPE who specializes in the study of high-energy astrophysics. He was joined by colleagues from the MPE, the INAF-Osservatorio Astronomico di Brera, the University of Science and Technology of China, and the Dr. Karl Remeis Observatory. The paper that details their findings, “The SRG/eROSITA diffuse soft X-ray background,” was published on October 29th, 2024, by the journal Astronomy & Astrophysics.

This image shows half of the X-ray sky projected onto a circle with the center of the Milky Way on the left and the galactic plane running horizontally. Credit: MPE/J. Sanders/eROSITA consortium

The eROSITA telescope was launched in 2019 as part of the Russian–German Spektr-RG space observatory. It is the first X-ray observatory to observe the Universe from beyond Earth’s geocorona, the outermost region of the Earth’s atmosphere (aka the exosphere), avoiding contamination from the geocorona’s ultraviolet light. In addition, the eROSITA All-Sky Survey (eRASS1) was timed to coincide with the solar minimum, thus reducing contamination from solar wind charge exchange.

For their study, the team combined data from eRASS1 with data from eROSITA’s predecessor, the X-ray telescope ROSAT (short for Röntgensatellit). Also built by the MPE, that telescope complements the eROSITA spectra by detecting X-rays with energies lower than 0.2 keV. The team focused on the portion of the LHB in the western Galactic hemisphere, dividing it into about 2000 regions and analyzing the spectra from each. Their analysis showed a clear temperature difference between the parts of the LHB oriented towards Galactic South (0.12 keV; 1.4 MK) and Galactic North (0.10 keV; 1.2 MK).

According to the authors, this difference could have been caused by supernova explosions that expanded and reheated the Galactic South portion of the LHB in the past few million years. Yeung explained in an MPE press release: “In other words, the eRASS1 data released to the public this year provides the cleanest view of the X-ray sky to date, making it the perfect instrument for studying the LHB.”

In addition to obtaining temperature data from the diffuse X-ray background spectra information, the combined data also provided a 3D structure of the hot gas. In a previous study, Yeung and his colleagues examined eRASS1 spectra data from almost all directions in the western Galactic hemisphere. They concluded that the density of the hot gas in the LHB is relatively uniform. Relying on this previous work, the team generated a new 3D model of the LHB from the measured intensity of X-ray emissions.

A 3D interactive view of the LHB and the solar neighborhood. Credit: MPE

This model shows that the LHB extends farther toward the Galactic poles than expected since the hot gas tends to follow the path of least resistance (away from the Galactic disc). Michael Freyberg, a core author of this work, was a part of the pioneering work in the ROSAT era three decades ago. As he explained:

“This is not surprising, as was already found by the ROSAT survey. What we didn’t know was the existence of an interstellar tunnel towards Centaurus, which carves a gap in the cooler interstellar medium (ISM). This region stands out in stark relief thanks to the much-improved sensitivity of eROSITA and a vastly different surveying strategy compared to ROSAT.”

These latest results suggest the Centaurus tunnel may be a local example of a wider hot ISM network sustained by supernovae and solar wind-ISM interaction across the Galaxy. While astronomers have theorized the existence of the Centaurus tunnel since the 1970s, it has remained difficult to prove until now. The team also compiled a list of known supernova remnants, superbubbles, and dust and used these to create a 3D model of the Solar System’s surroundings. The new model allows astronomers to better understand the key features in the representation.

These include the Canis Major tunnel, which may connect the LHB to the Gum Nebula (the red globe) or the long grey superbubble (GSH238+00+09). Dense molecular clouds, represented in orange, are shown near the surface of the LHB in the direction of the Galactic Center (GC). Recent work suggests these clouds are moving away from the Solar System and likely formed from the condensation of materials swept up during the early formation of the LHB. Said Gabriele Ponti, a co-author of this work:

“Another interesting fact is that the Sun must have entered the LHB a few million years ago, a short time compared to the age of the Sun. It is purely coincidental that the Sun seems to occupy a relatively central position in the LHB as we continuously move through the Milky Way.”

Further Reading: MPE, Astronomy & Astrophysics


An Explanation for Rogue Planets. They Were Eroded Down by Hot Stars

Fri, 11/08/2024 - 11:06am

The dividing line between stars and planets is that stars have enough mass to fuse hydrogen into helium to produce their own light, while planets aren’t massive enough to sustain core fusion. It’s generally a good way to divide them, except for brown dwarfs. These are bodies with a mass of about 15–80 Jupiters: large enough to fuse deuterium, but not massive enough to sustain hydrogen fusion. Another way to distinguish planets and stars is how they form. Stars form by the gravitational collapse of gas and dust within a molecular cloud, which allows them to gather mass on a short cosmic timescale. Planets, on the other hand, form by the gradual accumulation of gas and dust within the accretion disk of a young star. But again, that line becomes fuzzy for brown dwarfs.

The problem is that if brown dwarfs form within a molecular cloud as stars do, they aren’t massive enough to form quickly. If a cloud of gas and dust has enough mass to collapse under its own weight, it has enough mass to form a full star. But if brown dwarfs form like planets, they would have to accumulate mass incredibly quickly. Simulations of planet formation show it is difficult for a planet to form with a mass of more than a few Jupiters. So what gives? The answer may lie in what are known as Jupiter-mass binary objects, or JuMBOs.

The Orion nebula is a stellar nursery. Credit: NASA, ESA, M. Robberto

JuMBOs are binary objects where each component has a mass between 0.7 and 13 Jupiter masses. If they form like planets, they should be extremely rare, and if they form like binary stars, they should have more mass. Recent JWST observations of the Orion nebula cluster discovered 540 free-floating Jupiter-mass objects, so-called rogue planets. This was surprising in and of itself, but more surprising was the fact that 42 of them were JuMBOs. Far from being rare, they make up nearly 8% of these rogue objects. So how do they form?

One clue lies in their orbital separation. The components of JuMBOs are most commonly separated by 28–384 AU. This is similar to binary stars with components around the mass of the Sun, which typically have separations in the range of 50–300 AU. Binary stars are extremely common, more common than single stars like the Sun. The environment of stellar nurseries, such as the Orion nebula, is also extremely intense. Massive stars that form first can blast nearby regions with ionizing radiation. Given how common JuMBOs are, it is likely they began as binary stars, only to have much of their mass blasted away by photo-erosion. Rather than being binary planets, they are the failed remnants of binary stars.

This could also explain why so many rogue planets have super-Jupiter masses. The same intense light that would cause photo-erosion would also tend to push them out of star systems.

Reference: Diamond, Jessica L., and Richard J. Parker. “Formation of Jupiter-Mass Binary Objects through photoerosion of fragmenting cores.” The Astrophysical Journal 975.2 (2024): 204.


CODEX Coronagraph Heads to the ISS on Cargo Dragon

Fri, 11/08/2024 - 10:00am

A new space-based telescope aims to address a key solar mystery.

A new experiment will explore a region of the Sun that’s tough to see from the surface of the Earth. The solar corona—the elusive, pearly white region of the solar atmosphere seen briefly during a total solar eclipse—is generally swamped out by the dazzling Sun. Now, the Coronal Diagnostic Experiment (CODEX) will use a coronagraph to create an ‘artificial eclipse’ in order to explore the poorly understood middle corona region of the solar atmosphere.

CODEX launched as part of the cargo manifest on SpaceX’s Cargo Dragon this week, on mission CRS-31. CRS-31 arrived at the ISS and docked at the Harmony forward port of the station on November 5th.

CODEX is a partnership between NASA’s Goddard Space Flight Center, Italy’s National Institute for Astrophysics (INAF) and the Korea Astronomy and Space Science Institute (KASI). Technical expertise for the project was provided by the U.S. Naval Research Laboratory (NRL).

CODEX will be mounted on the EXPRESS (Expedite the Processing of Experiments to the Space Station) Logistics Carrier Site 3 (ELC-3) on the ISS.

An animation of CODEX on the ISS. Credit: NASA

Why Use Coronagraphs

Coronagraphs work by blocking out the Sun with an occulting disk. The disk used in CODEX is about as wide as an orange. Though coronagraphs can work on Earth, placing them in space is an easy way to eliminate unwanted light due to atmospheric scattering.

The solar corona, as imaged by the High Altitude Observatory’s coronagraph. UCAR/NCAR.

Targeting the middle region of the corona is crucial, as it’s thought to be the source of the solar wind. But what heats this region to temperatures hotter than the surface below? The rise is on the order of a million degrees, versus roughly 6000 degrees Celsius for the solar photosphere. The same unknown process accelerates particles to tremendous speeds of over a million kilometers an hour.

CODEX seeks to address this dilemma and will measure Doppler shifts in charged particles at four filtered wavelengths. The instrument will need to center and track the Sun from its perch on the exterior of the ISS, all while speeding around the Earth once every 90 minutes. CODEX will be able to see the Sun roughly half of the time, though seasons near either solstice will allow for near-continuous views.

CODEX will work with NASA’s Parker Solar Probe and ESA’s Solar Orbiter (SolO) in studying this coronal heating dilemma. It will also join the Solar and Heliospheric Observatory’s (SOHO) LASCO C2 and C3 coronagraphs in space. Another new coronagraph instrument in space is the National Oceanic and Atmospheric Administration’s CCOR-1 (Compact Coronagraph) aboard the GOES-19 satellite in geosynchronous orbit.

A Solar Wind Riddle

“CODEX measures the plasma’s temperature, speed and density around the whole corona between 3 and 10 solar radii, and will measure how those parameters evolve in time, providing new constraints on all theories of coronal heating,” Nicholeen Viall (GSFC Solar Physics Laboratory) told Universe Today. “Parker Solar Probe measures these plasma parameters in the upper corona (getting as close as 10 solar radii) in great detail, but it makes those measurements in situ (from one location in space and time) and only briefly that close to the Sun.”

The CODEX team with the instrument, ahead of launch. Credit: CODEX/NASA.

The goal of CODEX is to provide a holistic view of solar wind activity. “In contrast, CODEX provides a global view and context of these plasma parameters and their evolution,” says Viall. “Additionally, CODEX extends the measurements much closer to the Sun than Parker Solar Probe (PSP), linking the detailed measurements made at PSP at 10 solar radii through the middle corona, down to ~3 solar radii, closer to their source. This is important because most of the coronal heating has already taken place by 10 solar radii, where PSP measures.”

A Dual Mystery

Two theories vie to explain the solar heating mystery. One says that tangled magnetic fields are converted into thermal energy, which is fed into the corona as bursts. Another says that oscillations known as Alfvén waves inject energy into the lower corona in a sort of feedback loop.

“Solar Orbiter has (an) EUV (Extreme ultraviolet) and white light imager that could be used to connect the CODEX measurements to their sources on the Sun,” says Viall.

Understanding this region and the source of the solar wind is crucial to predicting space weather. This is especially vital when the Sun sends powerful coronal mass ejections our way. Not only can these spark low-latitude aurorae, but they can also impact communications and pose a hazard to satellites and astronauts in space.

“CODEX is similar to all coronagraphs, in that they block light out from the photosphere to see the much fainter corona,” says Viall. “CODEX’s field of view has overlap with, but is different than, SOHO’s coronagraphs and CCOR. The largest difference, though, is that CODEX has special filters that can provide the temperature and speed of the solar wind, in addition to the density measurements that white light coronagraphs always make.”

The Past (and Future) of Coronagraphs in Space

There’s also a history of coronagraphs aboard space stations, going all the way back to the white-light coronagraph aboard Skylab in the early 1970s.

Looking to the future, more coronagraphs are headed space-ward. ESA’s solar-observing Proba-3 launches at the end of November. Proba-3 will feature the first free-flying occulting disk as part of the mission. PUNCH (the Polarimeter to UNify the Corona and Heliosphere) will feature four micro-sat orbiters. The mission will rideshare launch with NASA’s SPHEREx mission early next year.

“PUNCH is a white light coronagraph and set of heliospheric imagers that together image from six solar radii out through the inner heliosphere,” says Viall. “PUNCH will be able to watch the structures that CODEX identifies as they evolve and are modulated farther out in the heliosphere.”

Finally, astronomers can also use coronagraph-style instruments to image exoplanets directly. The Nancy Grace Roman Space Telescope (set to launch in 2027) will feature one such instrument.

It will be exciting to see CODEX in action, as it probes the mysteries of the solar wind.


Flowing Martian Water was Protected by Sheets of Carbon Dioxide

Thu, 11/07/2024 - 5:44pm

Mars’ ancient climate is one of our Solar System’s most perplexing mysteries. The planet was once wet and warm; now it’s dry and cold. Whatever befell the planet, it didn’t happen all at once.

New research shows that on ancient cold Mars, sheets of frozen carbon dioxide allowed rivers to flow and a sea the size of the Mediterranean to exist.

Mars’ climatic change from warm and wet to cold and dry wasn’t abrupt. There was no catastrophic impact or other triggering event. Throughout its gradual shift, there were different climatic episodes.

The planet’s surface is characterized by features that indicate water’s presence. River channels, impact craters, and basins that were once paleolakes illustrate Mars’ complex climatic history. Mars is much different from Earth, but they both follow the same set of natural rules.

In Earth’s frigid climates, rivers can flow underneath thick, protective ice sheets. New research shows that a similar thing happened on Mars. The research is published in JGR Planets and is titled “Massive Ice Sheet Basal Melting Triggered by Atmospheric Collapse on Mars, Leading to Formation of an Overtopped, Ice-Covered Argyre Basin Paleolake Fed by 1,000-km Rivers.” The lead author is Peter Buhler, a Research Scientist at the Planetary Science Institute.

The research examines a period about 3.6 billion years ago when Mars was likely transitioning from the Noachian Period to the Hesperian Period. At that time, most of the surface water was frozen into large ice sheets in Mars’ southern region, according to the research. The planet’s CO2 atmosphere suffered periodic collapses, with the gas freezing out of the atmosphere. Those collapses formed a layer of CO2 ice 650 meters (0.4 miles) thick that created a massive cap over the South Pole. It insulated the 2.5-mile-thick (4 km) layer of frozen water that made up the ice sheets.

This simple schematic from the research shows how the proposed model works. When the CO2 atmosphere collapses and sublimates, it forms an insulating layer over the frozen water in Mars’ southern polar regions. The meltwater is released and flows across the surface, insulated by a layer of frozen water. Image Credit: Buhler, 2024.

Buhler modelled how the CO2 cap acted as a thermal blanket and showed that it released massive amounts of meltwater from the frozen pole. This water flowed down rivers, with the top layers freezing and insulating the liquid water underneath.

“You now have the cap on top, a saturated water table underneath and permafrost on the sides,” Buhler said. “The only way left for the water to go is through the interface between the ice sheet and the rock underneath it. That’s why on Earth you see rivers come out from underneath glaciers instead of just draining into the ground.”

According to Buhler’s work, enough water was liberated to fill the Argyre Basin.

The Argyre Basin is one of the largest impact basins on the planet, measuring roughly 1800 km (1100 mi) in diameter. This massive impact basin was formed billions of years ago by a comet or asteroid striking Mars. It drops about 5.2 km (3.2 mi) below the surrounding plains, making it the second deepest basin on Mars. Scientists have long thought that the basin once held water—as much as the Mediterranean Sea—and Buhler’s work shows how it may have filled.

“Eskers are evidence that at some point there was subglacial melt on Mars, and that’s a big mystery,” Buhler said. Eskers are long stratified ridges of sand and gravel deposited by meltwater streams that flow under glaciers. They’re common on Earth, where glaciers once covered the surface. Mars’ eskers support the idea that the same thing happened on that planet.

These are eskers in western Sweden. They were created by water flowing under a glacier. When the glacier retreated, they were left as evidence. The same likely happened on Mars. Image Credit: By Hanna Lokrantz – https://www.flickr.com/photos/geologicalsurveyofsweden/6853882122/in/album-72157625612122901/, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=42848874

The subglacial rivers would have flowed underneath the ice, where they were insulated from the cold. When they exited the glacier, they would have oozed along until a thick enough ice cap formed to insulate them. Buhler says that the ice would’ve grown until it was hundreds of meters thick, and the water flowing under the ice caps would’ve been several feet deep. The water would’ve carved out river channels thousands of miles long, and there are several of those that go from the polar cap to the Argyre Basin.

This figure shows the polar cap, the Argyre Crater, and the long sinuous channels that carried meltwater from the cap to the basin. Image Credit: Buhler 2024.

“People have been trying to discover processes that could make that happen, but nothing really worked,” Buhler said. “The current best hypothesis is that there was some unspecified global warming event, but that was an unsatisfying answer to me, because we don’t know what would have caused that warming. This model explains eskers without invoking climatic warming.”

Argyre Basin is massive, and proposed explanations for how it filled with water have been left wanting. It holds approximately the same volume as the Mediterranean Sea. Buhler’s model shows that it took about ten thousand years for the basin to fill, and after it filled, the water emptied into plains about 8,000 km (5,000 miles) away.
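
As a rough plausibility check, the Python sketch below estimates the average inflow needed to fill the basin in that time. The Mediterranean volume of roughly 3.75 million cubic kilometres is an assumed figure used here only for scale, not a number from the paper.

    # Average inflow needed to fill an Argyre-sized basin in ~10,000 years.
    VOLUME_M3 = 3.75e6 * 1e9  # assumed Mediterranean-scale volume: km^3 -> m^3
    FILL_YEARS = 1.0e4
    SECONDS_PER_YEAR = 3.156e7

    inflow = VOLUME_M3 / (FILL_YEARS * SECONDS_PER_YEAR)
    print(f"Average inflow: {inflow:.2e} m^3/s")  # ~1.2e4 m^3/s, a Mississippi-scale flow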

This process happened repeatedly over a one-hundred-million-year era, with each event separated by millions of years.

“This is the first model that produces enough water to overtop Argyre, consistent with decades-old geologic observations,” Buhler said. “It’s also likely that the meltwater, once downstream, sublimated back into the atmosphere before being returned to the south polar cap, perpetuating a pole-to-equator hydrologic cycle that may have played an important role in Mars’ enigmatic pulse of late-stage hydrologic activity. What’s more, it does not require late-stage warming to explain it.”

Buhler’s work is supported by other research. “Previous literature supports the presence of a ~0.6 bar (atmospheric) CO2 inventory, as utilized in the model, near the Noachian-Hesperian boundary,” he writes in his research. The history of Mars’ atmospheric pressure is backed up by cosmochemistry, mineralogy, atmosphere and meteorite trapped-gas isotopic ratios, geomorphology, and extrapolations of modern-day atmospheric escape.

“Thus, there is strong evidence that Mars had a sufficiently large mobile CO2 reservoir to drive the atmospheric-collapse-driven melting scenario described in this manuscript, with collapse occurring at a time commensurate with Valley Network formation during Mars’ intense, Late Noachian/Early Hesperian terminal pulse of intense fluvial activity,” Buhler writes.

That period of Mars’ history stands out as its own distinct phase of geological activity, whereas changes were more gradual in the earlier Noachian Period. The Late Noachian/Early Hesperian saw intense valley network formation. Many of these valleys are deeply carved into the landscape, often cutting through older geological features. That suggests that the water flow was powerful and erosive. This fluvial activity also created large deposits of sediment, like the ones NASA’s Perseverance Rover is exploring in Jezero Crater.

Jezero Crater on Mars. Scientists think that the sediments in the crater may be one km deep. Image Credit: NASA/JPL-Caltech/ASU

Buhler’s research is partly based on modern-day observations of Mars’ atmospheric CO2 and its cycles. Much of it is actually frozen and bound to the regolith. Mars’ rotational tilt shifts over a 100,000-year timeline. When it’s closer to straight up and down, the Sun hits the equator, and CO2 is released from the regolith into the atmosphere. It eventually reaches the poles, where it’s frozen into the caps.

When Mars is tilted, the poles are warmed, and the CO2 sublimates and is released into the atmosphere again. It eventually reaches the now-cooler regolith, which absorbs it. “The atmosphere is mostly just along for the ride,” Buhler said. “It acts as a conduit for the real action, which is the exchange between the regolith and the southern polar ice cap, even today.”

Buhler is still working with his model and intends to continue testing it more rigorously. If it successfully withstands more testing, our understanding of Mars will take a big leap forward.


Japan Launches the First Wooden Satellite to Space

Thu, 11/07/2024 - 4:56pm

Space debris – pieces of spent rocket stages, satellites, and other objects launched into orbit since 1957 – is a growing concern. According to the ESA Space Debris Office, there are roughly 40,500 objects in LEO larger than 10 cm (3.9 inches) in diameter, an additional 1.1 million objects measuring between 1 and 10 cm (0.39 to 3.9 inches) in diameter, and 130 million objects measuring 1 mm to 1 cm (0.039 to 0.39 inches). The situation is projected to worsen as commercial space companies continue to deploy “mega-constellations” of satellites for research, telecommunications, and broadband internet services.

To address this situation, researchers from Kyoto University have developed the world’s first wooden satellite. Except for its electronic components, this small satellite (LignoSat) is manufactured from magnolia wood. According to a statement issued on Tuesday, November 5th, by Kyoto University’s Human Spaceology Center, the wooden satellite was successfully launched into orbit atop a SpaceX Falcon 9 rocket from NASA’s Kennedy Space Center in Florida. This satellite, the first in a planned series, is designed to mitigate space debris and prevent what is known as “Kessler Syndrome.”

In 1978, NASA scientists Donald J. Kessler and Burton G. Cour-Palais proposed a scenario in which the density of objects in Low Earth Orbit (LEO) would become high enough that collisions between objects would cause a cascade effect. This would lead to a vicious cycle in which collisions caused debris, which would make further collisions more likely, leading to more collisions and more debris (and so on). For decades, astronomers and space agencies have feared that we are approaching this point or will be shortly.

Animation of Kyoto University’s prototype wooden satellite in space. Credit: Kyoto University

By manufacturing satellites out of wood, the Kyoto University scientists expect they will burn up completely when they re-enter Earth’s atmosphere at the end of their service lives. This will prevent potentially harmful metal particles from being generated when a retired satellite returns to Earth. The small satellite measures just 10 cm (4 in) on a side and weighs only 900 grams, making it one of the lightest satellites ever sent to space. Its name comes from the Latin word for wood (“lignum”) and CubeSat, a class of small satellites with a form factor of 10 cm cubes.

Before launch, the science team installed LignoSat in a special container prepared by the Japan Aerospace Exploration Agency (JAXA). According to a spokesperson for Sumitomo Forestry, LignoSat’s co-developer, the satellite will “arrive at the ISS soon and will be released to outer space about a month later.”

Once the satellite reaches the ISS, it will be transferred to the Kibo Japanese Experiment Module (JEM) before deployment. It will then spend the next six months in space, sending data to researchers who will monitor it for signs of strain. Ultimately, the goal is to determine whether wooden satellites can withstand the extreme temperature swings and conditions of space. A second satellite, LignoSat 2, a double-unit CubeSat, is currently scheduled for launch in 2026.

Further Reading: The Guardian


You Can Build a Home Radio Telescope to Detect Clouds of Hydrogen in the Milky Way

Thu, 11/07/2024 - 1:23pm

If I ask you to picture a radio telescope, you probably imagine a large dish pointing at the sky, or even an array of dish antennas such as the Very Large Array. What you likely don’t imagine is something that resembles a TV dish in your neighbor’s backyard. With modern electronics, it is relatively easy to build your own radio telescope. To understand how it can be done, check out a recent paper by Jack Phelps.

He outlines in detail how you can construct a small radio telescope with a 1-meter satellite dish, a Raspberry Pi, and some other basic electronics such as analog-to-digital converters. It’s a fascinating read, and one of the most interesting features is that his design is tuned to a frequency of 1420.405 MHz. This is the frequency emitted by neutral hydrogen. Since it has a wavelength of about 21 centimeters, the hydrogen emission line is sometimes called the 21-cm line. Neutral hydrogen comprises the bulk of ordinary matter in the Universe. The 21-cm emission isn’t particularly bright, but because there is so much hydrogen out there, the signal is easy to detect. And wherever there is matter, there too is the hydrogen line.

Observations of hydrogen in the Milky Way (red dots). Credit: Jack Phelps

The emission is caused by a spin flip of the hydrogen atom’s electron. It’s a hyperfine emission, which means the line is very sharp. If you see the line shifted a bit, you know that’s because of relative motion. Astronomers have used the line to map the distribution of matter in the Milky Way and have even used it to measure our galaxy’s rotation. Early observations of the line pointed to the existence of dark matter in our galaxy. And now you can do it at home.
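
Because the rest frequency is known so precisely, turning an observed line shift into a line-of-sight velocity is a one-line calculation. Here is a minimal Python sketch using the non-relativistic Doppler formula; the 300 kHz offset is just an illustrative example value.

    # Convert an observed 21-cm line frequency into a radial velocity.
    C = 299_792_458.0         # speed of light, m/s
    F_REST = 1_420.405_751e6  # rest frequency of neutral hydrogen, Hz

    def radial_velocity_kms(f_observed_hz: float) -> float:
        """Positive result = source receding (line shifted to lower frequency)."""
        return C * (F_REST - f_observed_hz) / F_REST / 1000.0

    # Example: a hydrogen cloud observed 300 kHz below the rest frequency
    print(f"{radial_velocity_kms(F_REST - 300e3):.1f} km/s")  # ~63.3 km/s, receding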

There are other radio objects you can observe in the sky. The Sun is a popular target, given its strong radio signal. Jupiter is another somewhat bright source. It’s a cool hobby. Even if you don’t intend to build a radio telescope of your own, it’s worth checking out the paper just to see how accessible radio astronomy has become.

Reference: J. Phelps. “Galactic Neutral Hydrogen Structures Spectroscopy and Kinematics: Designing a Home Radio Telescope for 21 cm Emission.” arXiv preprint arXiv:2411.00057 (2024).


A Space Walking Robot Could Build a Giant Telescope in Space

Wed, 11/06/2024 - 1:28pm

The Hubble Space Telescope was carried to space inside the space shuttle Discovery and then released into low-Earth orbit. The James Webb Space Telescope was squeezed inside the nose cone of an Ariane 5 rocket and then launched. It deployed its mirror and shade on its way to its home at the Sun-Earth L2 Lagrange point.

However, the ISS was assembled in space with components launched at different times. Could it be a model for building future space telescopes and other space facilities?

The Universe has a lot of dark corners that need to be peered into. That’s why we’re driven to build more powerful telescopes, which means larger mirrors. However, it becomes increasingly difficult to launch them into space inside rocket nose cones. Since we don’t have space shuttles anymore, this leads us to a natural conclusion: assemble our space telescopes in space using powerful robots.

New research in the journal Acta Astronautica examines the viability of using walking robots to build space telescopes.

The research is “The new era of walking manipulators in space: Feasibility and operational assessment of assembling a 25 m Large Aperture Space Telescope in orbit.” The lead author is Manu Nair from the Lincoln Centre for Autonomous Systems in the UK.

“This research is timely given the constant clamour for high-resolution astronomy and Earth observation within the space community and serves as a baseline for future missions with telescopes of much larger aperture, missions requiring assembly of space stations, and solar-power generation satellites, to list a few,” the authors write.

While the Canadarm and the European Robotic Arm on the ISS have proven capable and effective, they have limitations. They’re remotely operated by astronauts and have only limited walking abilities.

Recognizing the need for more capable space telescopes, space stations, and other infrastructure, Nair and his co-authors are developing a concept for an improved walking robot. “To address the limitations of conventional walking manipulators, this paper presents a novel seven-degrees-of-freedom dexterous End-Over-End Walking Robot (E-Walker) for future In-Space Assembly and Manufacturing (ISAM) missions,” they write.

An illustration of the E-walker. The robot has seven degrees of freedom, meaning it has seven independent motions. Image Credit: Mini Rai, University of Lincoln.

Robotics, Automation, and Autonomous Systems (RAAS) will play a big role in the future of space telescopes and other infrastructure. These systems require dexterity, a high degree of autonomy, redundancy, and modularity. A lot of work remains to create RAAS that can operate in the harsh environment of space. The E-Walker is a concept that aims to fulfill some of these requirements.

The authors point out how robots are being used in unique industrial settings here on Earth. The Joint European Torus is being decommissioned, and a Boston Dynamics Spot quadruped robot was deployed there to test how effectively autonomous robots can work in such an environment. It moved around the JET autonomously during a 35-day trial, mapping the facility and taking sensor readings, all while avoiding obstacles and personnel.

The Boston Dynamics Spot robot spent 35 days working autonomously on the Joint European Torus. Here, Spot is inspecting wires and pipes at the facility at Culham, near Oxford (Image Credit: UKAEA)

Using Spot during an industrial shutdown shows the potential of autonomous robots. However, robots still have a long way to go before they can build a space telescope. The authors’ case study could be an important initial step.

Their case study is the hypothetical LAST, a Large Aperture Space Telescope with a wide-field, 25-meter primary mirror that operates in visible light. LAST is the backdrop for the researchers’ feasibility study.

LAST’s primary mirror would be modular, and each piece would have connector ports and interfaces for construction and for data, power, and thermal transfer. This type of modularity would make it easier for autonomous systems to assemble the telescope.

LAST would build its mirror using Primary Mirror Units (PMUs). Nineteen PMUs make up a Primary Mirror Segment (PMS), and 18 PMSs would constitute LAST’s 25-meter primary mirror. A total of 342 PMUs would be needed to complete the telescope.
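
The mirror arithmetic is easy to verify from the counts quoted in the paper; a trivial Python check:

```python
# LAST mirror module counts as quoted in Nair et al. (2024).
PMUS_PER_SEGMENT = 19   # Primary Mirror Units per Primary Mirror Segment
SEGMENTS = 18           # Primary Mirror Segments in the 25-m mirror

total_pmus = PMUS_PER_SEGMENT * SEGMENTS
assert total_pmus == 342  # matches the paper's stated total
print(total_pmus)         # 342
```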

This figure shows how LAST would be constructed. 342 Primary Mirror Units make up the 18 Primary Mirror Segments, adding up to a 25-meter primary mirror. (b) shows how the center of each PMU is found, and (c) shows a PMU and its connectors. Image Credit: Nair et al. 2024.

The E-Walker concept would also have two spacecraft: a Base Spacecraft (BSC) and a Storage Spacecraft (SSC). The BSC would act as a kind of mothership, sending required commands to the E-Walker, monitoring its operational state, and ensuring that things go smoothly. The SSC would hold all of the PMUs in a stacked arrangement, and the E-Walker would retrieve one at a time.

The researchers developed eleven different Concepts of Operations (ConOps) for the LAST mission. Some of the ConOps included multiple E-Walkers working cooperatively. The goals are to optimize task-sharing, minimize the mass that must be launched from the ground, and simplify control and motion planning. “The above-mentioned eleven mission scenarios are studied further to choose the most feasible ConOps for the assembly of the 25m LAST,” they explain.

This figure summarizes the 11 mission ConOps developed for LAST. (a) shows assembly with a single E-walker, (b) shows partially shared responsibilities among the E-walkers, (c) shows equally shared responsibilities between E-walkers, and (d) shows assembly carried out in two separate units, which is the safer assembly option. Image Credit: Nair et al. 2024.

Advanced tools like robotics and AI will be mainstays in the future of space exploration. It’s almost impossible to imagine a future where they aren’t critical, especially as our goals become more complex. “The capability to assemble complex systems in orbit using one or more robots will be an absolute requirement for supporting a resilient future orbital ecosystem,” the authors write. “In the forthcoming decades, newer infrastructures in the Earth’s orbits, which are much more advanced than the International Space Station, are needed for in-orbit servicing, manufacturing, recycling, orbital warehouse, Space-based Solar Power (SBSP), and astronomical and Earth-observational stations.”

The authors point out that their work is based on some assumptions and theoretical models. The E-walker concept still needs a lot of work, but a prototype is being developed.

It’s likely that the E-walker or some similar system will eventually be used to build telescopes, space stations, and other infrastructure.

The post A Space Walking Robot Could Build a Giant Telescope in Space appeared first on Universe Today.

Categories: Astronomy

New Report Details What Happened to the Arecibo Observatory

Tue, 11/05/2024 - 8:05pm

In 1963, the Arecibo Observatory became operational on the island of Puerto Rico. Measuring 305 meters (~1000 ft) in diameter, Arecibo’s spherical reflector dish was the largest radio telescope in the world at the time – a record it maintained until 2016 with the construction of the Five-hundred-meter Aperture Spherical Telescope (FAST) in China. In December 2020, Arecibo’s instrument platform collapsed into the reflector dish after some of its support cables snapped, leading the National Science Foundation (NSF) to decommission the Observatory.

Shortly thereafter, the NSF and the University of Central Florida launched investigations to determine what caused the collapse. After nearly four years, the Committee on Analysis of Causes of Failure and Collapse of the 305-Meter Telescope at the Arecibo Observatory released an official report that details their findings. According to the report, the collapse was due to weakened infrastructure caused by long-term zinc creep-induced failure in the telescope’s cable sockets and previous damage caused by Hurricane Maria.

The massive dish was originally called the Arecibo Ionospheric Observatory and was intended for ionospheric research in addition to radio astronomy. The former task was part of the Advanced Research Projects Agency’s (ARPA) Defender Program, which aimed to develop ballistic missile defenses. By 1967, the NSF took over the administration of Arecibo, henceforth making it a civilian facility dedicated to astronomy research. By 1971, NASA signed a memorandum of understanding to share the costs of maintaining and upgrading the facility.

Radar images of 1991 VH and its satellite by Arecibo Observatory in 2008. Credit: NSF

During its many years of service, the Arecibo Observatory accomplished some amazing feats. This included the first-ever discovery of a binary pulsar in 1974, which led to the discovery team (Russell A. Hulse and Joseph H. Taylor) being awarded the Nobel Prize in physics in 1993. In 1985, the observatory discovered the binary asteroid 4337 Arecibo in the outer regions of the Main Asteroid Belt. In 1992, Arecibo discovered the first exoplanets, two rocky bodies roughly four times the mass of Earth around the pulsar PSR 1257+12. This was followed by the discovery of the first repeating Fast Radio Burst (FRB) in 2016.

The observatory was also responsible for sending the famous Arecibo Message, the most powerful broadcast ever beamed into space and humanity’s first true attempt at Messaging Extraterrestrial Intelligence (METI). The pictorial message, crafted by a group of Cornell University and Arecibo scientists that included Frank Drake (creator of the Drake Equation), famed science communicator and author Carl Sagan, Richard Isaacman, Linda May, and James C.G. Walker, was aimed at the globular star cluster M13.

According to the Committee report, the structural failure began when Hurricane Maria hit the Observatory on September 20th, 2017:

“Maria subjected the Arecibo Telescope to winds of between 105 and 118 mph, with the source of this uncertainty in wind speed discussed below... Based on a review of available records, the winds of Hurricane Maria subjected the Arecibo Telescope’s cables to the highest structural stress they had ever endured since it opened in 1963.”

However, inspections conducted after the hurricane concluded that “no significant damage had jeopardized the Arecibo Telescope’s structural integrity.” Repairs were nonetheless ordered, but the report identified several issues that delayed them for years. Even so, the investigation indicated that because the repairs were misdirected “toward components and replacement of a main cable that ultimately never failed,” they would not have prevented the Observatory’s collapse in any case.

Aerial view of the damage to the Arecibo Observatory following the collapse of the telescope platform on December 1st, 2020. Credit: Deborah Martorell

Moreover, in August and November of 2020, an auxiliary cable and then a main cable failed, which led to the NSF announcing that it would decommission the telescope through a controlled demolition to avoid a catastrophic collapse. They also stated that the other facilities in the observatory would remain operational, such as the Ángel Ramos Foundation Visitor Center. Before that could occur, however, more support cables buckled on December 1st, 2020, causing the instrument platform to collapse into the dish.

This collapse also removed the tops of the support towers and partially damaged some of the Observatory’s other buildings. Mercifully, no one was hurt. According to the report, the Arecibo Telescope’s cable spelter sockets had degraded considerably, as indicated by the previous cable failures. They also explain that the collapse was triggered by “hidden outer wire failures,” outer wires that had already fractured due to shear stress from zinc creep (also known as zinc decay) in the telescope’s cable spelter sockets.

This issue was not identified in the post-Maria inspection, leading to a lack of consideration of degradation mechanisms and an overestimation of the potential strength of the other cables. According to NSF statements issued in October 2022 and September 2023, the observatory will be remade into an education center known as Arecibo C3, focused on Ciencia (Science), Computación (Computing), and fostering Comunidad (Community). So, while the observatory’s long history of radio astronomy may have ended, it will carry on as a STEM education center, and its legacy will endure.

Further Reading: National Academies Press, Gizmodo

The post New Report Details What Happened to the Arecibo Observatory appeared first on Universe Today.

Categories: Astronomy

We Understand Rotating Black Holes Even Less Than We Thought

Tue, 11/05/2024 - 2:44pm

Black holes are real. We see them throughout the cosmos, and have even directly imaged the supermassive black hole in M87 and our own Milky Way. We understand black holes quite well, but the theoretical descriptions of these cosmic creatures still have nagging issues. Perhaps the most famous issue is that of the singularity. According to the classical model of general relativity, all the matter that forms a black hole must be compressed into an infinite density, enclosed within a sphere of zero volume. We assume that somehow quantum physics will avert this problem, though without a theory of quantum gravity, we aren’t sure how. But the singularity isn’t the only infinite problem. Take, for example, the strange boundary known as the Cauchy horizon.

When you get down to it, general relativity is a set of complex differential equations. To understand black holes, you must solve these equations subject to a set of conditions such as the amount of mass, rotation, and electromagnetic charge. The equations are so complex that physicists often focus on connecting solutions to certain mathematical boundaries, or horizons. For example, the event horizon is a boundary between the inside and outside of a black hole. It’s one of the easier horizons to explain because if you happen to cross the event horizon of a black hole, you are forever trapped within it. The event horizon is like a cosmic Hotel California.

For a simple, non-rotating black hole, the event horizon is the only one that really matters. But for rotating black holes, things get really weird. To begin with, the singularity becomes a ring, not a point. And rather than a single event horizon, there is an outer and an inner horizon. The outer one still acts as an event horizon, forever trapping what dares to cross its boundary. The inner one is what’s often called the Cauchy horizon. If you cross the inner horizon, you are still trapped within, but you aren’t necessarily doomed to fall ever closer toward the singularity. Within the Cauchy horizon, spacetime can behave somewhat normally, though it is bounded.
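
Both horizons drop out of the standard Kerr solution. As a hedged illustration (these are textbook formulas, not results from the new paper discussed below), the two radii depend only on the black hole’s mass and its dimensionless spin:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def kerr_horizons_m(mass_solar: float, spin: float):
    """Outer (event) and inner (Cauchy) horizon radii of a Kerr black
    hole, in meters. `spin` is the dimensionless parameter
    a* = cJ/(GM^2), between 0 and 1."""
    m_geo = G * mass_solar * M_SUN / C**2   # gravitational radius GM/c^2
    root = math.sqrt(1.0 - spin**2)
    return m_geo * (1.0 + root), m_geo * (1.0 - root)

# A 10-solar-mass black hole spinning at 90% of maximum:
outer, inner = kerr_horizons_m(10, 0.9)
print(f"event horizon: {outer/1e3:.1f} km, Cauchy horizon: {inner/1e3:.1f} km")
```

As the spin approaches zero, the inner horizon shrinks away and the outer one tends to the familiar Schwarzschild radius of 2GM/c².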

Horizon structure for a rotating black hole. Credit: Simon Tyran, via Wikipedia

The Cauchy horizon can cause all sorts of strange things, but one of them is that the horizon is unstable. If you try to determine perturbations of the horizon, the calculated mass within the horizon diverges, an effect known as mass inflation. It’s somewhat similar to the way the singularity approaches infinite density in the classical model. While this is frustrating, physicists can sweep it under the rug by invoking the principle of cosmic censorship. It basically says that as long as some basic conditions hold, all the strange behaviors like singularities and mass inflation are always bounded by an event horizon. There may be an infinity of mathematical demons in a black hole, but they can never escape, so we don’t really need to worry about them.

But a new paper may have handed those demons a key. The paper shows that mass inflation can occur even without a Cauchy horizon. Without an explicit Cauchy horizon, those basic conditions for cosmic censorship don’t necessarily apply. This suggests that the black hole solutions we get from general relativity are flawed. They can describe black holes that exist for a limited time, but not the long-lasting black holes that actually exist.

What this means isn’t entirely clear. It might be that this impermanent loophole is just general relativity’s way of pointing us toward a quantum theory of gravity. After all, if Hawking radiation is real, all black holes are impermanent and eventually evaporate. But the result could also suggest that general relativity is only partially correct, and what we need is an extension of Einstein’s model the way GR extended Newtonian gravity. What is clear is that our understanding of black holes is incomplete.

Reference: Carballo-Rubio, Raúl, et al. “Mass inflation without Cauchy horizons.” Physical Review Letters 133.18 (2024): 181402.

The post We Understand Rotating Black Holes Even Less Than We Thought appeared first on Universe Today.

Categories: Astronomy

Habitable Worlds are Found in Safe Places

Tue, 11/05/2024 - 2:30pm

When we think of exoplanets that may be able to support life, we home in on the habitable zone. A habitable zone is a region around a star where planets receive enough stellar energy to have liquid surface water. It’s a somewhat crude but helpful first step when examining thousands of exoplanets.

However, there’s a lot more to habitability than that.

In a dense stellar environment, planets in habitable zones have more than their host star to contend with. Stellar flybys and exploding supernovae can eject habitable zone exoplanets from their solar systems and even destroy their atmospheres or the planets themselves.

New research examines the threats facing the habitable zone planets in our stellar neighbourhood. The study is “The 10 pc Neighborhood of Habitable Zone Exoplanetary Systems: Threat Assessment from Stellar Encounters & Supernovae,” and it has been accepted for publication in The Astronomical Journal. The lead author is Tisyagupta Pyne from the Integrated Science Education And Research Centre at Visva-Bharati University in India.

The researchers examined the 10-parsec regions around the 84 solar systems with habitable zone exoplanets. Some of these Habitable Zone Systems (HZS) face risks from stars outside of the solar systems. How do these risks affect their habitability? What does it mean for our notion of the habitable zone?

“Among the 4,500+ exoplanet-hosting stars, about 140+ are known to host planets in their habitable zones,” the authors write. “We assess the possible risks that local stellar environment of these HZS pose to their habitability.”

This image from the research shows the sky positions of exoplanet-hosting stars projected on a Mollweide map. HZS are denoted by yellow-green circles, while the remaining population of exoplanets is represented by gray circles. The studied sample of 84 HZS, located within 220 pc of the Sun, is represented by crossed yellow-green circles. The three high-density HZS located near the galactic plane are labeled 1, 2 and 3 in white. The colour bar represents the stellar density, i.e., the number of stars within a radius of 5 arcminutes; the labeled systems each have more than 15 stars within that radius. Image Credit: Pyne et al. 2024.

We have more than 150 confirmed exoplanets in habitable zones, and as exoplanet science advances, scientists are developing a more detailed understanding of what habitable zone means. Scientists increasingly use the terms conservative habitable zone and optimistic habitable zone.

The optimistic habitable zone is defined as regions that receive less radiation from their star than Venus received one billion years ago and more than Mars did 3.8 billion years ago. Scientists think that recent Venus (RV) and early Mars (EM) both likely had surface water.

The conservative habitable zone is a more stringent definition. It’s a narrower region around a star where an exoplanet could have surface water. It’s defined by an inner runaway greenhouse edge where stellar flux would vaporize surface water and an outer maximum greenhouse edge where the greenhouse effect of carbon dioxide is dominated by Rayleigh scattering.
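
To make the two definitions concrete: the boundary distances scale as the square root of stellar luminosity divided by an effective flux threshold for each limit. The sketch below uses approximate Sun-like threshold values from the widely cited Kopparapu et al. (2013) models; treat the exact numbers as illustrative rather than as the values used by Pyne et al.:

```python
import math

# Approximate effective flux thresholds (in units of the flux Earth
# receives) from Kopparapu et al. (2013) for a Sun-like star.
S_EFF = {
    "recent Venus (optimistic inner)": 1.776,
    "runaway greenhouse (conservative inner)": 1.038,
    "maximum greenhouse (conservative outer)": 0.351,
    "early Mars (optimistic outer)": 0.320,
}

def hz_distance_au(luminosity_solar: float, s_eff: float) -> float:
    """Orbital distance where a planet receives the given effective flux."""
    return math.sqrt(luminosity_solar / s_eff)

for label, s in S_EFF.items():
    print(f"{label}: {hz_distance_au(1.0, s):.2f} AU")
```

For a star of solar luminosity, this gives a conservative zone of roughly 0.98 to 1.69 AU and an optimistic zone of roughly 0.75 to 1.77 AU.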

Those are useful scientific definitions as far as they go. But what about habitable stellar environments? In recent years, scientists have learned a lot about how stars behave, the characteristics of exoplanets, and how the two are intertwined.

“The discovery of numerous extrasolar planets has revealed a diverse array of stellar and planetary characteristics, making systematic comparisons crucial for evaluating habitability and assessing the potential for life beyond our solar system,” the authors write.

To make these necessary systematic comparisons, the researchers developed two metrics: the Solar Similarity Index (SSI) and the Neighborhood Similarity Index (NSI). Since main sequence stars like our Sun are conducive to habitability, the SSI compares our Solar System’s properties with those of other HZS. The NSI compares the properties of stars in a 10-parsec region around the Sun to the same size region around other HZS.
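
The paper defines the SSI and NSI precisely; purely to illustrate the general idea of a normalized similarity index, here is a hypothetical sketch in which the property names, values, and formula are our invention, not the authors’:

```python
def similarity_index(props_a: dict, props_b: dict) -> float:
    """Hypothetical normalized similarity between two neighbourhoods.

    Each dict maps a property name (e.g. stellar density, mean stellar
    mass) to a value. Returns 1.0 for identical inputs, falling toward
    0 as they diverge. Illustration of the concept only; the actual
    SSI/NSI definitions are in Pyne et al. (2024).
    """
    keys = props_a.keys() & props_b.keys()
    diffs = [
        abs(props_a[k] - props_b[k]) / max(abs(props_a[k]), abs(props_b[k]), 1e-12)
        for k in keys
    ]
    return 1.0 - sum(diffs) / len(diffs)

# Made-up feature vectors for the solar neighbourhood and another HZS:
solar = {"stars_within_10pc": 400, "mean_stellar_mass": 0.40}
other = {"stars_within_10pc": 520, "mean_stellar_mass": 0.35}
print(f"NSI-like similarity: {similarity_index(solar, other):.2f}")  # ~0.82
```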

This research is mostly based on data from the ESA’s Gaia spacecraft, which is building a map of the Milky Way by measuring one billion stars. But the further away an HZS is, or the dimmer the stars are, the more likely Gaia may not have detected every star, which affects the research’s results. This image shows Gaia’s data completeness. The colour scale indicates the faintest G magnitude at which the 95% completeness threshold is achieved. “Our sample of 84 HZS (green circles) has been overlaid on the map to visually depict the completeness of their respective neighbourhoods,” the authors write. Image Credit: Pyne et al. 2024.

These indices put habitable zones in a larger context.

“While the concept of HZ is vital in the search for habitable worlds, the stellar environment of the planet also plays an important role in determining longevity and maintenance of habitability,” the authors write. “Studies have shown that a high rate of catastrophic events, such as supernovae and close stellar encounters in regions of high stellar density, is not conducive to the evolution of complex life forms and the maintenance of habitability over long periods.”

When radiation and high-energy particles from a distant source reach a planet in a habitable zone, they can cause severe damage to Earth-like planets. Supernovae are a dangerous source of radiation and particles, and if one were to explode close enough to Earth, that would be the end of life. Scientists know that ancient supernovae have left their mark on Earth, but none of them were close enough to destroy the atmosphere.

“Our primary focus is to investigate the effects of SNe on the atmospheres of exoplanets or exomoons assuming their atmospheres to be Earth-like,” the authors write.

The first factor is stellar density. The more stars in a neighbourhood, the greater the likelihood of supernova explosions and stellar flybys.

“The astrophysical impacts of the stellar environment is a “low-probability, high-consequence” scenario for the continuation of habitability of exoplanets,” the authors write. Though disruptive events like supernova explosions or close stellar flybys are unlikely, the consequences can be so severe that habitability is completely eliminated.

When it came to the supernova threat, the researchers looked at high-mass stars in stellar neighbourhoods since only massive stars explode. Pyne and her colleagues found high-mass stars with more than eight solar masses in the 10-parsec neighbourhoods of two HZS: TOI-1227 and HD 48265. “These high-mass stars are potential progenitors for supernova explosions,” the authors explain.

Only one of the HZS is at risk of a stellar flyby. HD 165155 has an encounter rate of roughly one close pass per 5 Gyr. That means it’s at greater risk of an encounter with another star that could eject planets from its habitable zone.

The team’s pair of indices, the SSI and the NSI, produced divergent results. “… we find that the stellar environments of the majority of HZS exhibit a high degree of similarity (NSI > 0.75) to the solar neighbourhood,” they explain. However, because of the wide variety of stars in HZS, comparing them to the Sun results in a wide range of SSI values.

We know the danger supernova explosions pose to habitability. The initial burst of radiation could kill anything on the surface of a planet too close. The ongoing radiation could strip away the atmospheres of some planets further away and could also cause DNA damage in any lifeforms exposed to it. For planets that are further from the blast, the supernova could alter their climate and trigger extinctions. There’s no absolutely certain understanding of how far away a planet needs to be to avoid devastation, but many scientists say that within 50 light-years, a planet is probably toast.

We can see the results of some of the stellar flybys the authors are considering. Rogue planets, or free-floating planets (FFPs), are likely in their hapless situations precisely because a stellar interloper got too close to their home systems and disrupted the gravitational relationships between the planets and their stars. We don’t know how many of these FFPs are in the Milky Way, but there could be many billions of them. Future telescopes like the Nancy Grace Roman Space Telescope will help us understand how many there truly are.

An artist’s illustration of a rogue planet, dark and mysterious. Image Credit: NASA

Habitability may be fleeting, and our planet may be the exception. It’s possible that life appears on many planets in habitable zones but can’t last long due to various factors. From a great distance away, we can’t detect all the variables that go into exoplanet habitability.

However, we can gain an understanding of the stellar environments in which potentially habitable exoplanets exist, and this research shows us how.

The post Habitable Worlds are Found in Safe Places appeared first on Universe Today.

Categories: Astronomy

New Glenn Booster Moves to Launch Complex 36

Tue, 11/05/2024 - 2:21pm

Nine years ago, Blue Origin revealed the plans for their New Glenn rocket, a heavy-lift vehicle with a reusable first stage that would compete with SpaceX for orbital flights. Since that time, SpaceX has launched hundreds of rockets, while Blue Origin has been working mostly in secret on New Glenn. Last week, the company rolled out the first prototype of the first-stage booster to the launch complex at Cape Canaveral Space Force Station. If all goes well, we could see a late November test on the launch pad.

The test will be an integrated launch vehicle hot-fire which will include the second stage and a stacked payload.

Images posted on social media by Blue Origin CEO Dave Limp showed the 57-meter (188-foot)-long first stage with its seven BE-4 engines as it was transported from the production facility in Merritt Island, Florida — next to the Kennedy Space Center — to Launch Complex 36 at Cape Canaveral. Limp said that it was a 23-mile, multiple-hour journey “because we have to take the long way around.” The booster was carried by Blue Origin’s transporter, nicknamed GERT (Giant Enormous Rocket Truck).


“Our transporter comprises two trailers connected by cradles and a strongback assembly designed in-house,” said Limp on X. “There are 22 axles and 176 tires on this transport vehicle…The distance between GERT’s front bumper and the trailer’s rear is 310’, about the length of a football field.”

Limp said the next step is to put the first and second stages together on the launch pad for the fully integrated hot fire dress rehearsal. The second stage recently completed its own hot fire at the launch site.

An overhead view of the New Glenn booster heading to launch complex 36 at Cape Canaveral during the night of Oct. 30, 2024. Credit: Blue Origin/Dave Limp.

Hopefully the test will lead to Blue Origin’s first ever launch to orbit. While the New Glenn rocket has had its share of delays, it seems Blue Origin has also taken a slow, measured approach to prepare for its first launch. In February of this year, a boilerplate of the rocket was finally rolled onto the launch pad at Cape Canaveral for testing.  Then in May 2024, New Glenn was rolled out again for additional testing. Now, the fully integrated test in the next few weeks will perhaps lead to a launch by the end of the year.

New Glenn’s seven engines will give it more than 3.8 million pounds of thrust on liftoff. The goal is for New Glenn to reuse its first-stage booster and the seven engines powering it, with recovery on a barge located downrange off the coast of Florida in the Atlantic Ocean.

New Glenn boosters are designed for 25 flights.

Blue Origin says New Glenn will launch payloads into high-energy orbits. It can carry more than 13 metric tons to geostationary transfer orbit (GTO) and 45 metric tons to low Earth orbit (LEO).

For the first flight, Blue Origin will be flying its own hardware as a payload, a satellite deployment technology called Blue Ring. Even though it doesn’t have a paying customer for the upcoming launch, a successful flight would count as the first of two certification flights the U.S. Space Force requires before New Glenn can be awarded future national security missions alongside SpaceX and United Launch Alliance (ULA).

Additional details can be found at PhysOrg and NASASpaceflight.com.

The post New Glenn Booster Moves to Launch Complex 36 appeared first on Universe Today.

Categories: Astronomy

How Many Additional Exoplanets are in Known Systems?

Tue, 11/05/2024 - 10:05am

One thing we’ve learned in recent decades is that exoplanets are surprisingly common. So far, we’ve confirmed nearly 6,000 planets, and we have evidence for thousands more. Most of these planets were discovered using the transit method, though there are other methods as well. Many stars are known to have multiple planets, such as the TRAPPIST-1 system with seven Earth-sized worlds. But even within known planetary systems there could be planets we’ve overlooked. Perhaps their orbit doesn’t pass in front of the star from our vantage point, or the evidence of their presence is buried in data noise. How might we find them? A recent paper on the arXiv has an interesting approach.

Rather than combing through the observational data trying to extract more planets from the noise, the authors suggest that we look at the orbital dynamics of known systems to see if planets might be possible between the planets we know. Established systems are millions or billions of years old, so their planetary orbits must be stable on those timescales. If the planets of a system are “closely packed,” then adding new planets to the mix would cause the system to go all akilter. If the system is “loosely packed,” then we could add hypothetical planets between the others, and the system would still be dynamically stable.

The seven planetary systems considered. Credit: Horner, et al

To show how this would work, the authors consider seven planetary systems discovered by the Transiting Exoplanet Survey Satellite (TESS) known to have two planets. Since it isn’t likely that a system has only two planets, there is a good chance they have others. The team then ran thousands of simulations of these systems with hypothetical planets, calculating if they could remain stable over millions of years. They found that for two of the systems, extra planets (other than planets much more distant than the known ones) could be ruled out on dynamical grounds. Extra planets would almost certainly destabilize the systems. But five of the systems could remain stable with more planets. That doesn’t mean those systems have more planets, only that they could.
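
Full N-body simulations of the kind the team ran are the gold standard, but a quick analytic screen conveys what “packed” means. A common rule of thumb, the Gladman (1993) criterion (offered here as an illustration, not necessarily the paper’s exact method), holds that two planets are stable when their orbital separation exceeds about 2√3 mutual Hill radii:

```python
def mutual_hill_radius_au(a1, a2, m1, m2, m_star):
    """Mutual Hill radius in AU; semi-major axes in AU, planet masses
    in the same units as the stellar mass (e.g. solar masses)."""
    return ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0) * (a1 + a2) / 2.0

def hill_stable(a1, a2, m1, m2, m_star, threshold=2.0 * 3.0 ** 0.5):
    """Gladman (1993) two-planet criterion: stable if the orbital
    separation exceeds ~2*sqrt(3) mutual Hill radii."""
    return abs(a2 - a1) > threshold * mutual_hill_radius_au(a1, a2, m1, m2, m_star)

# Could an Earth-mass planet squeeze midway between two Neptune-mass
# planets at 0.1 and 0.2 AU around a Sun-like star? Check each pair.
M_EARTH = 3.0e-6            # Earth mass in solar masses
m_nep = 17 * M_EARTH
print(hill_stable(0.10, 0.15, m_nep, M_EARTH, 1.0),
      hill_stable(0.15, 0.20, M_EARTH, m_nep, 1.0))   # True True
```

In this made-up example both pairs pass, so the hypothetical middle planet is not ruled out on spacing grounds; a real assessment would still need the long-term simulations the authors performed.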

One of the things this work shows is that most of the currently known exoplanetary systems likely have yet-undiscovered worlds. This approach could also help us sort systems to determine which ones might deserve a further look. We are still in the early stages of discovery, and we are gathering data with incredible speed. We need tools like this so we aren’t overwhelmed by piles of new data.

Reference: Horner, Jonathan, et al. “The Search for the Inbetweeners: How packed are TESS planetary systems?” arXiv preprint arXiv:2411.00245 (2024).

The post How Many Additional Exoplanets are in Known Systems? appeared first on Universe Today.

Categories: Astronomy

Hubble and Webb are the Dream Team. Don't Break Them Up

Tue, 11/05/2024 - 3:16am

Many people think of the James Webb Space Telescope as a sort of Hubble 2. They understand that the Hubble Space Telescope (HST) has served us well but is now old, and overdue for replacement. NASA seems to agree, as they have not sent a maintenance mission in over fifteen years, and are already preparing to wind down operations. But a recent paper argues that this is a mistake. Despite its age, HST still performs extremely well and continues to produce an avalanche of valuable scientific results. And given that JWST was never designed as a replacement for HST — it is an infrared (IR) telescope — we would best be served by operating both telescopes in tandem, to maximize coverage of all observations.

Let’s not fool ourselves: the Hubble Space Telescope (HST) is old, and is eventually going to fall back to Earth. Although it was designed to be repairable and upgradable, there have been no servicing missions since 2009. Those missions relied on the Space Shuttle, which could capture the telescope and provide a working base for astronauts. Servicing missions could last weeks, and only the Space Shuttle could transport a full servicing crew to the telescope and house them for the duration of the mission.

Without those servicing missions, failing components can no longer be replaced, and the overall health of HST will keep declining. If nothing is done, HST will eventually stop working altogether. To avoid it becoming just another piece of space junk, plans are already being developed to de-orbit it and send it crashing into the Pacific Ocean. But that’s no reason to give up on it. It still has as clear a view of the cosmos as ever, and mission scientists are doing an excellent job of working around technical problems as they arise.

The James Webb Space Telescope was launched into space on Christmas Day in 2021. Its system of foldable hexagonal mirrors gives it an effective diameter some 2.7 times larger than HST, and it is designed to see down into the mid-IR range. Within months of deployment, it had already seen things that clashed with existing models of how the Universe formed, creating a mini-crisis in some fields and leading unscrupulous news editors to write headlines questioning whether the “Big Bang Theory” was under threat!

This image of NASA’s Hubble Space Telescope was taken on May 19, 2009 after deployment during Servicing Mission 4. NASA

The reason JWST was able to capture such ancient galaxies is that it is primarily an IR telescope: As the Universe expands, photons from distant objects get red-shifted until stars that originally shone in visible light can now only be seen in the IR. But these IR views are proving extremely valuable in other scientific fields apart from cosmology. In fact, many of the most striking images released by JWST’s press team are IR images of familiar objects, revealing hidden complexities that had not been seen before.
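
The arithmetic behind that shift is simple: an observed wavelength is stretched by a factor of (1 + z), where z is the redshift. A short sketch (our illustration) shows why the visible light of a young galaxy lands in JWST’s infrared bands:

```python
def observed_wavelength_nm(rest_nm: float, z: float) -> float:
    """Cosmological redshift: wavelengths are stretched by (1 + z)."""
    return rest_nm * (1.0 + z)

# H-alpha, a prominent visible hydrogen line at 656 nm, emitted by a
# galaxy at redshift z = 10, arrives near 7.2 micrometers: mid-infrared
# light that JWST can see but HST cannot.
print(f"{observed_wavelength_nm(656.0, 10.0) / 1000:.1f} micrometers")
```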

This is a key difference between the two telescopes: While HST’s range overlaps slightly with JWST’s, it can see all the way up into ultraviolet (UV) wavelengths. HST was launched in 1990, seven years late and billions of dollars over budget. Its 2.4 meter primary mirror needed to be one of the most precisely ground mirrors ever made, because it was intended to be diffraction limited at UV wavelengths. Famously, avoidable problems in the testing process led to it being very precisely figured to a slightly wrong shape, resulting in spherical aberration that prevented it from coming to a sharp focus.

Fortunately the telescope was designed from the start to be serviceable, and even returned to Earth for repairs by the Space Shuttle if necessary. In the end though, NASA opticians were able to design and build a set of corrective optics to solve the problem, and the COSTAR system was installed by astronauts on the first servicing mission. Over the years, NASA sent up three more servicing missions, to upgrade or repair components, and install new instruments.

Illustration of NASA’s James Webb Space Telescope. Credits: NASA

HST is arguably one of the most successful scientific instruments ever built. Since 1990, it has been the subject of approximately 1200 science press releases, which together have been read more than 400 million times. The more than 46,000 scientific papers written using HST data have been cited more than 900,000 times! And even in its current degraded state, it still provided data for 1435 papers in 2023 alone.

JWST also ran over time and over budget, but had a far more successful deployment. Despite having a much larger mirror, with more than six times the collecting area of HST, the entire observatory only weighs half as much as HST. Because of its greater sensitivity, and the fact that it can see ancient light redshifted into IR wavelengths, it can see far deeper into the Universe than HST. It is these observations, of galaxies formed when the Universe was extremely young (100 – 180 million years), that created such excitement shortly after it was deployed.
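
The mirror comparison is easy to check against the commonly published effective collecting areas, about 25.4 m² for JWST and 4.0 m² for HST (the exact figures depend on how central obstructions are counted):

```python
# Effective collecting areas as commonly published; both account for
# central obstructions, so the ratio differs slightly from a naive
# diameter-squared comparison.
JWST_AREA_M2 = 25.4
HST_AREA_M2 = 4.0

print(f"area ratio: {JWST_AREA_M2 / HST_AREA_M2:.1f}x")   # ~6.4x
# A naive comparison of mirror diameters (6.5 m vs 2.4 m) gives
# (6.5 / 2.4)**2, roughly 7.3, in the same ballpark.
```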

As valuable as these telescopes are, they will not last forever. JWST is located deep in space, some 1.5 million kilometers from Earth near the L2 Lagrange point. When it eventually fails, it will become just another piece of debris orbiting the Sun in the vast emptiness of the Solar System. HST, however, is in Low Earth Orbit (LEO), and suffers very slight amounts of drag from the faint outer reaches of the atmosphere. Over time it will gradually lose speed, drifting downwards until it enters the atmosphere proper and crashes to Earth. Because of its size, it will not burn up completely, and large chunks will smash into the surface.

Because it cannot be predicted where exactly it will re-enter, mission planners always intended to capture it with the Space Shuttle and return it to Earth before this happened. Its final resting place was supposed to be on display in a museum, but unfortunately the shuttle program was cancelled. The current plan is to send up an uncrewed rocket which will dock with the telescope (a special attachment was installed on the final servicing mission for this purpose), and deorbit it in a controlled way to ensure that its pieces land safely in the ocean.

You can find the original paper at https://arxiv.org/abs/2410.01187

The post Hubble and Webb are the Dream Team. Don't Break Them Up appeared first on Universe Today.

Categories: Astronomy

Scientists Have Figured out why Martian Soil is so Crusty

Mon, 11/04/2024 - 7:13pm

On November 26th, 2018, NASA’s Interior Exploration using Seismic Investigations, Geodesy, and Heat Transport (InSight) mission landed on Mars. This was a major milestone in Mars exploration since it was the first time a research station had been deployed to the surface to probe the planet’s interior. One of the most important instruments InSight would use to do this was the Heat Flow and Physical Properties Package (HP3) developed by the German Aerospace Center (DLR). Also known as the Martian Mole, this instrument measured the heat flow from deep inside the planet for four years.

The HP3 was designed to dig up to five meters (~16.5 ft) into the surface to sense heat deeper in Mars’ interior. Unfortunately, the Mole struggled to burrow itself and never got more than just beneath the surface, which came as a surprise to scientists. Nevertheless, the Mole gathered considerable data on the daily and seasonal fluctuations below the surface. Analysis of this data by a team from the German Aerospace Center (DLR) has yielded new insight into why Martian soil is so “crusty.” According to their findings, temperatures in the top 40 cm (~16 inches) of the Martian surface lead to the formation of salt films that harden the soil.

The analysis was conducted by a team from the Microgravity User Support Center (MUSC) of the DLR Space Operations and Astronaut Training Institution in Cologne, which is responsible for overseeing the HP3 experiment. The heat data it obtained from the interior could be integral to understanding Mars’s geological evolution and addressing theories about its core region. At present, scientists suspect that geological activity on Mars largely ceased by the late Hesperian period (ca. 3 billion years ago), though there is evidence that lava still flows there today.

The “Mars Mole,” Heat Flow and Physical Properties Package (HP³). Credit: DLR

This was likely caused by Mars’ interior cooling faster due to its lower mass and lower pressure. Scientists theorize that this caused Mars’ outer core to solidify while its inner core remained liquid—though this remains an open question. By comparing the subsurface temperatures obtained by InSight to surface temperatures, the DLR team could measure the rate of heat transport in the crust (thermal diffusivity) and thermal conductivity. From this, the density of the Martian soil could be estimated for the first time.

The team determined that the density of the uppermost 30 cm (~12 inches) of soil is comparable to basaltic sand – something that was not anticipated based on orbiter data. This material is common on Earth and is created by weathering volcanic rock rich in iron and magnesium. Beneath this layer, the soil density is comparable to consolidated sand and coarser basalt fragments. Tilman Spohn, the principal investigator of the HP3 experiment at the DLR Institute of Planetary Research, explained in a DLR press release:

“To get an idea of the mechanical properties of the soil, I like to compare it to floral foam, widely used in floristry for flower arrangements. It is a lightweight, highly porous material in which holes are created when plant stems are pressed into it... Over the course of seven Martian days, we measured thermal conductivity and temperature fluctuations at short intervals.

Additionally, we continuously measured the highest and lowest daily temperatures over the second Martian year. The average temperature over the depth of the 40-centimetre-long thermal probe was minus 56 degrees Celsius (217.5 Kelvin). These records, documenting the temperature curve over daily cycles and seasonal variations, were the first of their kind on Mars.”

NASA’s InSight spacecraft landed in the Elysium Planitia region on Mars on 26 November 2018. Credit: NASA-JPL/USGS/MOLA/DLR

Because the encrusted Martian soil (aka. “duricrust”) extends to a depth of 20 cm (~8 inches), the Mole managed to penetrate just a little more than 40 cm (~16 inches) – well short of its 5 m (~16.5 ft) objective. Nevertheless, the data obtained at this depth has provided valuable insight into heat transport on Mars. Accordingly, the team found that ground temperatures fluctuated by only 5 to 7 °C (9 to 12.5 °F) during a Martian day, a tiny fraction of the swings observed on the surface, which span 110 to 130 °C (198 to 234 °F).

Seasonally, they noted temperature fluctuations of 13 °C (~23.5 °F) while remaining below the freezing point of water on Mars in the layers near the surface. This demonstrates that the Martian soil is an excellent insulator, significantly reducing the large temperature differences at shallow depths. This influences various physical properties of Martian soil, including elasticity, thermal conductivity, heat capacity, the movement of material within, and the speed at which seismic waves can pass through it.
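
That damping follows classic heat-conduction theory: a periodic surface temperature wave decays exponentially with depth over a characteristic “skin depth” that grows with the period of the cycle. The sketch below uses an illustrative diffusivity for loose regolith; the DLR team derived the actual value from the HP3 data:

```python
import math

def skin_depth_m(diffusivity_m2_s: float, period_s: float) -> float:
    """Depth over which a periodic temperature wave decays by 1/e."""
    return math.sqrt(diffusivity_m2_s * period_s / math.pi)

KAPPA = 5e-8          # illustrative thermal diffusivity for loose regolith, m^2/s
SOL = 88_775          # one Martian day, in seconds
YEAR = 668.6 * SOL    # one Martian year, in seconds

for label, period in [("daily", SOL), ("seasonal", YEAR)]:
    d = skin_depth_m(KAPPA, period)
    damping_40cm = math.exp(-0.40 / d)
    print(f"{label}: skin depth {d:.2f} m, "
          f"amplitude at 40 cm x {damping_40cm:.1e}")
```

With these illustrative numbers, the daily wave is damped to almost nothing by 40 cm down, while the seasonal wave survives at about two-thirds of its surface amplitude, qualitatively matching the small swings the probe recorded.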

“Temperature also has a strong influence on chemical reactions occurring in the soil, on the exchange with gas molecules in the atmosphere, and therefore also on potential biological processes regarding possible microbial life on Mars,” said Spohn. “These insights into the properties and strength of the Martian soil are also of particular interest for future human exploration of Mars.”

What was particularly interesting, though, is how the temperature fluctuations enable the formation of salty brines for ten hours a day (when there is sufficient moisture in the atmosphere) in winter and spring. Therefore, the solidification of this brine is the most likely explanation for the duricrust layer beneath the surface. This information could prove very useful as future missions explore Mars and attempt to probe beneath the surface to learn more about the Red Planet’s history.

Further Reading: DLR

The post Scientists Have Figured out why Martian Soil is so Crusty appeared first on Universe Today.

Categories: Astronomy