Most celestial bodies – from stars and nebulae to quasars and galaxies – emit light across a range of wavelengths. Some emit visible light, allowing astronomers to photograph them with space telescopes such as Hubble. But the James Webb Space Telescope and the Chandra X-ray Observatory observe celestial objects at infrared and X-ray wavelengths invisible to the human eye. That data is often translated into visible colors to produce spectacular space images. Now a group of astronomers is making those images accessible to a wider audience, including the visually impaired, by converting the data into almost musical sequences of sounds.
“If you just take a picture of a Chandra image or any other NASA image, you can leave people behind,” said Kim Arcand, a visualization scientist who is collaborating with a small, independent group of astronomers and musicians on a science and art project called SYSTEM Sounds. Arcand, who describes herself as a former choir and band nerd, is also the emerging technology lead for NASA’s Chandra X-ray Observatory. Until a few years ago, that meant activities such as adding sound to scientific outreach programs for virtual and augmented reality. Then Arcand, along with a few others who became the SYSTEM Sounds group, began converting X-ray data into audio. “We’ve had such a positive response from people, both sighted and blind or visually impaired, that it’s the project that keeps on giving,” she says. Today, the group also partners with NASA’s Universe of Learning, a program that provides resources for science education.
Visual images from the JWST or Chandra instruments are artificial in a sense, as they use false colors to represent invisible frequencies. (If you actually traveled to these deep-space locations, they would look different.) Similarly, Arcand and the SYSTEM Sounds team translate image data at infrared and X-ray wavelengths into sounds rather than optical colors. They call these “sonifications,” and they are intended to provide a new way to experience cosmic phenomena, such as the birth of stars or the interactions between galaxies.
Translating a 2D image into sound starts with the image’s individual pixels. Each one can hold different types of data, such as X-ray frequencies from Chandra or infrared frequencies from Webb, and these values can then be assigned to sound frequencies. Anyone, even a simple computer program, can make a one-to-one conversion between pixels and simple beeps and booms. “But if you’re trying to tell a scientific story about the object,” says Arcand, “music can help tell that story.”
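To make that baseline concrete, here is a minimal sketch of such a one-to-one conversion – not SYSTEM Sounds’ actual software – written in Python, the language the team works in. Each pixel’s brightness is mapped linearly onto an audible frequency and rendered as a short sine-wave beep; the sample row of pixels, the frequency range, and the beep length are all illustrative choices.

```python
import wave
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second

def pixel_to_frequency(brightness, f_min=220.0, f_max=880.0):
    """Map a 0-255 brightness value linearly onto an audible frequency range."""
    return f_min + (brightness / 255.0) * (f_max - f_min)

def beep(freq, duration=0.1):
    """Render one short sine-wave beep for a single pixel."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

# A made-up row of pixel brightnesses standing in for one line of an image.
pixel_row = [12, 40, 200, 255, 90, 5, 180]

audio = np.concatenate([beep(pixel_to_frequency(p)) for p in pixel_row])

# Write a 16-bit mono WAV file using only the standard library.
with wave.open("beeps.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```

The result is exactly the kind of flat, mechanical beeping the quote warns about: every pixel is audible, but nothing guides the listener’s attention.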
That’s where Matt Russo, an astrophysicist and musician, comes in. He and his colleagues choose a particular image and then feed the data into sound-editing software they wrote in Python. (It works a bit like GarageBand.) Like cosmic conductors, they have to make musical choices: they select instruments to represent certain wavelengths (such as an oboe or a flute for the near-infrared or mid-infrared) and decide which objects to draw the listener’s attention to, in what order and at what speed – similar to panning across a landscape.
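One common mapping in sonifications of this kind – an assumption made here for illustration, not a reconstruction of the team’s software – is to scan the image left to right like a panning camera, with a pixel’s vertical position setting pitch and its brightness setting volume. The sketch below does exactly that on a tiny invented image; the scan speed and pitch range are arbitrary parameters a “conductor” would tune.

```python
import wave
import numpy as np

SAMPLE_RATE = 44100
COLUMN_DURATION = 0.05  # seconds of audio per image column (the panning speed)

def sonify_column(column, f_min=200.0, f_max=2000.0):
    """Turn one vertical slice of the image into a chord:
    higher rows map to higher pitches, brighter pixels to louder partials."""
    t = np.linspace(0, COLUMN_DURATION, int(SAMPLE_RATE * COLUMN_DURATION), endpoint=False)
    out = np.zeros(t.size)
    rows = len(column)
    for row, brightness in enumerate(column):
        if brightness == 0:
            continue
        frac = 1.0 - row / (rows - 1)           # row 0 is the top of the image
        freq = f_min * (f_max / f_min) ** frac  # log-spaced pitches sound even
        out += (brightness / 255.0) * np.sin(2 * np.pi * freq * t)
    return out

# A tiny fake 8x16 "image": a single bright diagonal streak standing in for real data.
image = np.zeros((8, 16))
for i in range(8):
    image[i, i * 2] = 255

# Scan left to right, one column at a time, like panning across a landscape.
audio = np.concatenate([sonify_column(image[:, c]) for c in range(image.shape[1])])
audio /= max(np.max(np.abs(audio)), 1e-9)  # normalize to avoid clipping

with wave.open("scan.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```

Played back, the diagonal streak comes out as a falling glissando – a direct audio analogue of panning across the frame.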
They guide the listener through the picture by drawing attention to one object at a time, or to a selected group, so that it can be distinguished from everything else in the frame. “You can’t reproduce everything in the picture with sound,” says Russo. “You have to accentuate the things that matter most.” For example, they can highlight a particular galaxy within a cluster, the arm of a spiral galaxy unfurling, or a bright star exploding. They also try to distinguish between the foreground and background of a scene: a bright star in the Milky Way might land as a cymbal crash, while the light from distant galaxies produces more muted tones.
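That foreground/background contrast can be sketched the same way. In the toy example below, a hypothetical source catalog – the names, brightness values, and threshold are all invented – routes bright nearby stars to a noisy, fast-decaying “cymbal” voice and faint distant galaxies to a quiet, sustained tone.

```python
import numpy as np

SAMPLE_RATE = 44100

def cymbal_hit(duration=0.4):
    """A bright foreground star: a burst of noise with a fast decay,
    a rough stand-in for a cymbal crash."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    noise = np.random.default_rng(0).uniform(-1, 1, t.size)
    return noise * np.exp(-t * 12)

def muted_tone(freq=150.0, duration=0.4):
    """A distant background galaxy: a quiet, slowly swelling sine tone."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    envelope = np.minimum(t / 0.2, 1.0) * 0.2  # slow attack, low level
    return envelope * np.sin(2 * np.pi * freq * t)

def voice_for(source):
    """Pick a timbre from a source's (hypothetical) brightness and distance tags."""
    if source["foreground"] and source["brightness"] > 200:
        return cymbal_hit()
    return muted_tone()

# A hypothetical catalog of sources detected in the frame.
sources = [
    {"name": "Milky Way star", "brightness": 240, "foreground": True},
    {"name": "distant galaxy", "brightness": 60, "foreground": False},
]

# Both voices have the same length, so they can simply be mixed together;
# the result can be written to a WAV file exactly as in the earlier sketches.
mix = sum(voice_for(s) for s in sources)
```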