Posts with «science» label

India is the first country to land at the Moon's south pole

India just made spaceflight history in more ways than one. The Chandrayaan-3 spacecraft's Vikram lander has successfully touched down on the Moon, marking the country's first successful landing on the lunar surface. India is just the fourth country to do so, after the Soviet Union, the US and China. More importantly, it's the first country to land near the Moon's south pole — a difficult target given the rough terrain, but an important one for attempts to find water ice. Other nations have only landed near the equator.

The landing comes four years after Chandrayaan-2's Vikram lander effectively crashed. The Indian Space Research Organization (ISRO) designed the follow-up with a "failure-based design" that includes more backup systems, a wider landing area and software updates.

HISTORY HAS BEEN MADE #Chandrayaan3's successful landing means that India is now the 4th country to soft-land a spacecraft on the Moon, and we are now the ONLY country to land successfully near the south pole of the Moon! 🇮🇳🌖 #ISRO pic.twitter.com/1D8Bdo4r8F

— ISRO Spaceflight (@ISROSpaceflight) August 23, 2023

Vikram will remain idle for hours to allow lunar dust to settle. Once the area is clear, the Pragyaan rover will deploy to take photos and collect scientific data. Combined, the lander and rover have five instruments meant to gauge the properties of the Moon's atmosphere, surface and tectonic activity. ISRO timed the landing for the start of a lunar day (about 28 Earth days) to maximize the amount of solar power available for Vikram and Pragyaan.

Chandrayaan-3's success is a matter of national pride for India. The country has been eager to become a major power in spaceflight, and hopes to launch a space station around 2030. It can now claim to be one of just a handful of countries that have ever reached an extraterrestrial surface. The info gathered near the pole could also be crucial for future lunar missions from India and other countries, which could use any discovered ice for fuel, oxygen and water.

The landing also puts India ahead of other countries racing to land on the Moon, if not always for the first time. Russia's Luna-25 spacecraft crashed just two days earlier, and Israel expects a follow-up to its Beresheet lander in 2024. The United Arab Emirates also wants to land by 2024. The US, meanwhile, hopes to return people to the Moon with its Artemis 3 mission in late 2025. And that doesn't include commercial efforts. There's a renewed interest in Earth's closest cosmic neighbor, and India is now part of that vanguard.

This article originally appeared on Engadget at https://www.engadget.com/india-is-the-first-country-to-land-at-the-moons-south-pole-133322596.html?src=rss

Webb Space Telescope captures the Ring Nebula in mesmerizing detail

The James Webb Space Telescope (JWST) captured extraordinarily detailed images of the Ring Nebula, published today. The gaseous cloud, also called M57 and NGC 6720, contains 20,000 dense globules rich in molecular hydrogen. It sits about 2,500 light-years away from Earth.

The first image (above) was taken with the NIRCam (Near InfraRed Camera), one of the Webb Space Telescope’s primary sensors. It is designed to detect light in the near-infrared spectrum and can capture remarkably detailed images. NIRCam also took the equally hypnotizing updated image of the Pillars of Creation.

Meanwhile, the second image (below) was captured using the JWST’s MIRI (Mid-InfraRed Instrument). It better highlights the nebula’s (roughly) ten concentric arcs beyond its outer edge, likely formed from its central star’s interaction with a lower-mass companion in its orbit. “In this way, nebulae like the Ring Nebula reveal a kind of astronomical archaeology, as astronomers study the nebula to learn about the star that created it,” the European Space Agency wrote in a press release.

ESA / Webb / NASA / CSA / M. Barlow / N. Cox / R. Wesson

The Ring Nebula was discovered somewhat serendipitously in 1779 by French astronomers Charles Messier and Antoine Darquier de Pellepoix while they searched for comets. It’s a planetary nebula, named as such because early researchers mistook their appearances for distant worlds. The Ring Nebula formed from a medium-sized star that shed its outer layers as it exhausted its fuel and approached its demise.

“The colourful main ring is composed of gas thrown off by a dying star at the centre of the nebula,” the ESA wrote. “This star is on its way to becoming a white dwarf — a very small, dense, and hot body that is the final evolutionary stage for a star like the Sun.”

This article originally appeared on Engadget at https://www.engadget.com/webb-space-telescope-captures-the-ring-nebula-in-mesmerizing-detail-213005773.html?src=rss

Russia's Luna-25 spacecraft crashes into the Moon

Russia's first attempt to land on the Moon since 1976 has ended in disappointment. Ten days after its August 10th launch, Russia's state-run space agency, Roscosmos, confirmed its Luna-25 spacecraft had spun out of control and rammed into the Moon. "The apparatus moved into an unpredictable orbit and ceased to exist as a result of a collision with the surface of the Moon," Roscosmos explained in a statement. The organization initially reported the incident as an "abnormal situation" before sharing news of the crash.

Luna-25 was headed to the south pole to find water ice and spend a year analyzing how it emerged there and whether there was a link with water appearing on Earth. It was also set to test drive technology and examine the regolith (the soil covering moon rock). The plan was for it to remain in the Moon's orbit for five days before touching down on August 21st. Luna-25 took a range of images pre-crash, including one of the Zeeman crater, near the Moon's south pole.

🌘 Welcome to the other side of the #moon!

👉 Russia's #Luna25 has shared first pics of lunar surface – they show Zeeman crater on the moon's far side.

The ultimate goal is to land on the moon's South Pole in search of water. Looking forward to new amazing photos from space 😍 pic.twitter.com/kRlnJBFLwM

— Russia 🇷🇺 (@Russia) August 19, 2023

If successful, it would have been the first craft to land on the south pole — a title that may now go to India. Russia was racing to beat India, whose spacecraft launched on July 14th and is expected to land on the Moon on August 23rd.

Countries across the globe are gearing up for their own moon missions. Currently, the United States plans to have humans orbit the Moon in 2024 and land on it in 2025. China, Japan, Mexico, Canada and Israel are among the other nations with active plans to reach the Moon.

This article originally appeared on Engadget at https://www.engadget.com/russias-luna-25-spacecraft-crashes-into-the-moon-093542172.html?src=rss

Scientists recreate an iconic Pink Floyd song by scanning listeners' brains

You know when a certain song comes on and it encompasses your whole being for a few minutes? Music has a way of causing a unique and engaging stimulation in your brain, one that scientists are working to understand and mimic. Such was the case in a recent study published in PLOS Biology in which researchers successfully implemented technology that recreated Pink Floyd’s Another Brick in the Wall, Part 1 solely using brain activity. It utilized a technique known as stimulus reconstruction and built on previous innovations allowing researchers to recreate a song akin to the one a person had heard.

The 29 participants had pharmacoresistant epilepsy and intracranial grids or strips of electrodes which had been surgically implanted to aid in their treatment. Researchers utilized these electrodes to record activity across multiple auditory regions of the individuals’ brains that process aspects of music like lyrics and harmony — while the participants actively listened to Another Brick in the Wall, Part 1. The entirety of the recordings took place at Albany Medical Center, in upstate New York.

Scientists used AI to analyze and then recreate the words and sounds participants had heard. Though the final product is quite muffled, the song is recognizable to anyone listening, and you can check it out for yourself. The researchers are also confident that they could improve its quality in future attempts.

The listening experience primarily engaged the right side of participants’ brains, mostly in the superior temporal gyrus, and especially when absorbing unique music. There was also a small level of stimulation in the left side of the brain. Researchers further found that a point in the brain’s temporal lobe lit up in time with the 16th notes of the song’s rhythm guitar, which played at 99 beats per minute.

This finding could provide more insight into the part that area plays in processing rhythm. It could also aid in restoring speech to people who have lost the ability through conditions like amyotrophic lateral sclerosis (ALS). Instead of producing a monotone, almost robotic output, a better understanding of the way a brain processes and responds to music might lead to more fluid speech prosthetics.

This article originally appeared on Engadget at https://www.engadget.com/scientists-recreate-an-iconic-pink-floyd-song-by-scanning-listeners-brains-114053359.html?src=rss

Astronomers confirm Maisie’s galaxy is one of the oldest observed

Astronomers have used advanced instruments to calculate a more accurate age for Maisie’s galaxy, discovered by the James Webb Space Telescope (JWST) in June 2022. Although the galaxy isn’t quite as old as initially estimated, it’s still one of the oldest recorded, dating from 390 million years after the Big Bang — making it about 13.4 billion years old. That’s a mere 70 million years younger than JADES-GS-z13-0, the (current) oldest-known system.

A team led by University of Texas at Austin astronomer Steven Finkelstein discovered the system last summer. (The name “Maisie’s galaxy” is an ode to his daughter, because the team spotted it on her birthday.) The group initially estimated that it dated from only 290 million years after the Big Bang, but analyzing the galaxy with more advanced equipment revealed it actually dates from about 100 million years later. “The exciting thing about Maisie’s galaxy is that it was one of the first distant galaxies identified by JWST, and of that set, it’s the first to actually be spectroscopically confirmed,” said Finkelstein.

The spectroscopic confirmation came courtesy of the JWST’s Near InfraRed Spectrograph (NIRSpec) conducted by the Cosmic Evolution Early Release Science Survey (CEERS). The NIRSpec “splits an object’s light into many different narrow frequencies to more accurately identify its chemical makeup, heat output, intrinsic brightness and relative motion.” Redshift — the movement of light towards longer (redder) wavelengths to indicate motion away from the observer — held the key to more accurate dating than the original photometry-based estimate. The advanced tools assigned a redshift of z=11.4 to Maisie’s galaxy, helping the researchers settle on the revised estimate of 390 million years after the Big Bang.
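As a rough illustration of what that redshift figure means (a back-of-the-envelope sketch, not CEERS pipeline code), a redshift z stretches a line's rest-frame wavelength by a factor of (1 + z). At z = 11.4, hydrogen's ultraviolet Lyman-alpha line lands deep in the near-infrared, right where NIRSpec observes:

```python
# Toy illustration: how a measured redshift relates emitted and
# observed wavelengths. All wavelengths in nanometres.

LYMAN_ALPHA_NM = 121.567  # rest-frame hydrogen Lyman-alpha line

def observed_wavelength(rest_nm: float, z: float) -> float:
    """Wavelength an emission line is stretched to at redshift z."""
    return rest_nm * (1 + z)

def redshift(rest_nm: float, observed_nm: float) -> float:
    """Redshift inferred from a rest-frame line seen at observed_nm."""
    return observed_nm / rest_nm - 1

# At Maisie's galaxy's redshift of z = 11.4, Lyman-alpha is shifted
# from the ultraviolet into the near-infrared.
shifted = observed_wavelength(LYMAN_ALPHA_NM, 11.4)
print(f"{shifted:.0f} nm")                          # ~1507 nm
print(f"{redshift(LYMAN_ALPHA_NM, shifted):.1f}")   # recovers 11.4
```

The same relation run in reverse is what lets astronomers turn an observed spectrum into a redshift, and from there (via a cosmological model) into an age estimate.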

James Webb Space Telescope
ASSOCIATED PRESS

The astronomers also examined CEERS-93316, a galaxy initially estimated at 235 million years after the Big Bang — which would have made it astonishingly old. Further study revealed a redshift of z=4.9, which places the galaxy at a mere one billion years after the Big Bang. The initial faulty estimate for CEERS-93316 was understandable: The galaxy emitted an unusual amount of light in narrow frequency bands associated with oxygen and hydrogen, making it appear bluer than it was.

Finkelstein chalks up the miss to bad luck. “This was a kind of weird case,” he said. “Of the many tens of high redshift candidates that have been observed spectroscopically, this is the only instance of the true redshift being much less than our initial guess.” Finkelstein added, “It would have been really challenging to explain how the universe could create such a massive galaxy so soon. So, I think this was probably always the most likely outcome, because it was so extreme, so bright, at such an apparent high redshift.”

The CEERS team is now evaluating about 10 more systems that could be older than Maisie’s galaxy.

This article originally appeared on Engadget at https://www.engadget.com/astronomers-confirm-maisies-galaxy-is-one-of-the-oldest-observed-205246905.html?src=rss

Scientists genetically engineer bacteria to detect cancer cells

An international team of scientists has developed a new technology that can help detect (or even treat) cancer in hard-to-reach places, such as the colon. The team has published a paper in Science for the technique dubbed CATCH, or cellular assay for targeted, CRISPR-discriminated horizontal gene transfer. For their lab experiments, the scientists used a species of bacterium called Acinetobacter baylyi. This bacterium has the ability to naturally take up free-floating DNA from its surroundings and then integrate it into its own genome, allowing it to produce new protein for growth.  

The scientists engineered A. baylyi bacteria to contain long sequences of DNA mirroring the DNA found in human cancer cells. These sequences serve as one half of a zipper that locks onto captured cancer DNA. For their tests, the scientists focused on the mutated KRAS gene that's commonly found in colorectal tumors. If an A. baylyi bacterium finds mutated DNA and integrates it into its genome, a linked antibiotic resistance gene also gets activated. That's what the team used to confirm the presence of cancer cells: After all, only bacteria with active antibiotic resistance could grow on culture plates laced with antibiotics.

While the scientists were successfully able to detect tumor DNA in mice injected with colorectal cancer cells in the lab, the technology is still not ready to be used for actual diagnosis. The team said it's still working on the next steps, including improving the technique's efficiency and evaluating how it performs compared to other diagnostic tests. "The most exciting aspect of cellular healthcare, however, is not in the mere detection of disease. A laboratory can do that," Dan Worthley, one of the study's authors, wrote in The Conversation. In the future, the technology could also be used for targeted biological therapy that can deploy treatment to specific parts of the body based on the presence of certain DNA sequences. 

This article originally appeared on Engadget at https://www.engadget.com/scientists-genetically-engineer-bacteria-to-detect-cancer-cells-114511365.html?src=rss

Russia heads to the Moon for the first time in 47 years

Russia is heading back to the Moon as it tries to reassert itself as a significant world power in the wake of its war on Ukraine. A rocket carrying the Luna-25 craft will mark Russia’s first lunar mission since 1976. The expedition will attempt to land the exploration vehicle on the moon’s south pole, hoping to dig up water ice beneath the surface. You can tune in to watch the launch here.

The Soyuz 2.1v rocket carrying the lander is scheduled to lift off from the Vostochny spaceport in eastern Russia at 7:10 pm Eastern time. If successful, it would be the first spacecraft to make a soft landing on the Moon’s south pole. NASA confirmed in 2020 the discovery of water molecules in sunlit parts of the Moon’s surface. Salvageable water could mark a breakthrough for lunar exploration, providing future human lunar missions with life support, fuel (through extracted hydrogen) and even potential agriculture.

Russia’s space trip also serves as a salvo in its attempt to reestablish itself as a significant world power unmoved by the West’s sanctions over its 2022 invasion of Ukraine. The vessel’s name is even a callback to the Soviet Space Program: Its last mission was the Luna-24, which spent 13 days heading to the Moon and back to collect samples in 1976. Referencing an era when the Soviet Union was an undeniable world superpower fits with President Vladimir Putin’s goals to project an image of Russian preeminence.

Luna-25 is also in a race against India: the country’s Chandrayaan-3 mission launched on July 14th and entered the Moon’s orbit this week. India’s craft is scheduled to reach the Moon’s south pole on August 23rd. The Luna-25 will take five days to reach the Moon and is expected to spend five to seven days in orbit before touching down. That timeline has Russia’s lander potentially reaching the Moon around the same time as India’s, if not slightly ahead.

The craft is expected to conduct experiments — using its 68 lbs of research equipment — on the Moon for about a year. It includes a scoop that can capture samples from up to 15 cm (six inches) deep in its hunt for frozen water.

You can watch the launch stream below starting at around 7:10 pm EDT.

This article originally appeared on Engadget at https://www.engadget.com/russia-heads-to-the-moon-for-the-first-time-in-47-years-203057705.html?src=rss

Watch Virgin Galactic's first ever space tourist flight at 11am ET

Virgin Galactic might hit another milestone today in its quest to provide trips to suborbital space. If the weather cooperates and everything goes as planned for the company, its first private passenger flight will be taking off from its Spaceport America facility at 11AM EDT. Virgin Galactic's inaugural commercial flight took place in late June, but that one carried Italian government workers, including two Air Force personnel, to space. This time, its three passengers are civilians, and one of them is even the company's first paying customer. 

That distinction goes to Jon Goodwin, a British Olympian who competed in the 1972 games in Munich as a canoeist. According to the BBC, Goodwin paid $250,000 for his ticket way back in 2005 and had been worried that he couldn't go through with the flight after he was diagnosed with Parkinson's disease in 2014. The other two passengers are a mother-daughter tandem from the Caribbean, Keisha Schahaff and Anastatia Mayers. Schahaff won two seats in a fundraising draw for nonprofit organization Space for Humanity and had chosen her daughter, a physics student at Aberdeen University in the UK, to accompany her. 

The company's VSS Unity spacecraft leaves the ground attached to a carrier aircraft dubbed VMS Eve. At an altitude of 50,000 feet, the mothership drops Unity, which then fires up its rocket motor to continue its journey to the edge of space. The spacecraft turns off its motor and glides across space before its descent, giving passengers three minutes to enjoy weightlessness in the cabin while looking at views of our planet through Unity's 17 windows. That is, at least, what the passengers are supposed to experience. As for the rest of us, we can watch them take off via Virgin Galactic's coverage of the launch livestreamed through its website.

Be a part of history TOMORROW as we launch the inspiring crew of #Galactic02 to space! Watch the livestream at 9:00 am MDT | 11:00 am EDT and sign up so you don't miss it: https://t.co/5UalYTpiHL pic.twitter.com/LmM7o9sTxM

— Virgin Galactic (@virgingalactic) August 9, 2023

This article originally appeared on Engadget at https://www.engadget.com/watch-virgin-galactics-first-ever-space-tourist-flight-at-11am-et-143100162.html?src=rss

ISS experiment will help scientists work out how to keep astronauts cool in space

On August 4th, Northrop Grumman's 19th resupply mission to the ISS arrived at the orbiting lab, carrying not just necessities for its inhabitants but also an experiment that could greatly benefit future human colonies beyond our planet. Specifically, the mission carried a module with hardware that could help us understand how heating and air conditioning systems can operate in reduced gravity and in the extreme temperatures observed on the Moon and Mars. Daytime temperatures near the lunar equator, for instance, reach 250 degrees Fahrenheit, higher than the boiling point of water. At night, temperatures drop to -208 degrees Fahrenheit. For comparison, the lowest temperature ever recorded on Earth was -128.6 degrees Fahrenheit, back in 1983.

The hardware was designed and built by scientists and engineers from Purdue University and NASA's Glenn Research Center in Cleveland. It will allow Purdue scientists to conduct the second part of their Flow Boiling and Condensation Experiment (FBCE), which has been collecting data aboard the ISS since 2021. They've already finished gathering data for the first part of their study that focuses on measuring the effects of reduced gravity on boiling. This part will now focus on investigating how condensation works in a reduced-gravity environment.

Issam Mudawar, the Purdue professor in charge of the experiment, explained: "We have developed over a hundred years' worth of understanding of how heat and cooling systems work in Earth’s gravity, but we haven’t known how they work in weightlessness."

His team has published over 60 research papers on reduced gravity and fluid flow from the data they've collected so far, and they're in the midst of preparing more. They believe that in addition to providing the information needed to enable human colonies to live on the moon and on the red planet, their experiment could also provide the scientific understanding to enable spacecraft to travel longer distances and to refuel in orbit.

This article originally appeared on Engadget at https://www.engadget.com/iss-experiment-will-help-scientists-work-out-how-to-keep-astronauts-cool-in-space-081822506.html?src=rss

Why humans can't use natural language processing to speak with the animals

We’ve been wondering what goes on inside the minds of animals since antiquity. Doctor Dolittle’s talent was far from novel when the character debuted in 1920; Greco-Roman literature is lousy with speaking animals, writers in Zhanguo-era China routinely ascribed language to certain animal species and talking animals are also prevalent in Indian, Egyptian, Hebrew and Native American storytelling traditions.

Even today, popular Western culture toys with the idea of talking animals, though often through a lens of technology-empowered speech rather than supernatural force. The dolphins from both SeaQuest DSV and Johnny Mnemonic communicated with their bipedal contemporaries through advanced translation devices, as did Dug the dog from Up.

We’ve already got machine-learning systems and natural language processors that can translate human speech into any number of existing languages, and adapting that process to convert animal calls into human-interpretable signals doesn’t seem that big of a stretch. However, it turns out we’ve got more work to do before we can converse with nature.

What is language?

“All living things communicate,” an interdisciplinary team of researchers argued in 2018’s On understanding the nature and evolution of social cognition: a need for the study of communication. “Communication involves an action or characteristic of one individual that influences the behavior, behavioral tendency or physiology of at least one other individual in a fashion typically adaptive to both.”

From microbes, fungi and plants on up the evolutionary ladder, science has yet to find an organism that exists in such extreme isolation as to not have a natural means of communicating with the world around it. But we should be clear that “communication” and “language” are two very different things.

“No other natural communication system is like human language,” argues the Linguistics Society of America. Language allows us to express our inner thoughts and convey information, as well as request or even demand it. “Unlike any other animal communication system, it contains an expression for negation — what is not the case … Animal communication systems, in contrast, typically have at most a few dozen distinct calls, and they are used only to communicate immediate issues such as food, danger, threat, or reconciliation.”

That’s not to say that pets don’t understand us. “We know that dogs and cats can respond accurately to a wide range of human words when they have prior experience with those words and relevant outcomes,” Dr. Monique Udell, Director of the Human-Animal Interaction Laboratory at Oregon State University, told Engadget. “In many cases these associations are learned through basic conditioning,” Dr. Udell said — like when we yell “dinner” just before setting out bowls of food.

Whether our dogs and cats actually understand what “dinner” means outside of the immediate Pavlovian response remains to be seen. “We know that at least some dogs have been able to learn to respond to over 1,000 human words (labels for objects) with high levels of accuracy,” Dr. Udell said. “Dogs currently hold the record among non-human animal species for being able to match spoken human words to objects or actions reliably,” but it’s “difficult to know for sure to what extent dogs understand the intent behind our words or actions.”

Dr. Udell continued: “This is because when we measure a dog or cat’s understanding of a stimulus, like a word, we typically do so based on their behavior.” You can teach a dog to sit with both English and German commands, but “if a dog responds the same way to the word ‘sit’ in English and in German, it is likely the simplest explanation — with the fewest assumptions — is that they have learned that when they sit in the presence of either word then there is a pleasant consequence.”

Tea Stražičić for Engadget/Silica Magazine

Hush, the computers are speaking

Natural language processing (NLP) is the branch of AI that enables computers and algorithmic models to interpret text and speech, including the speaker’s intent, the same way we meatsacks do. It combines computational linguistics, which models the syntax, grammar and structure of a language, with machine-learning models, which “automatically extract, classify, and label elements of text and voice data and then assign a statistical likelihood to each possible meaning of those elements,” according to IBM. NLP underpins the functionality of every digital assistant on the market. Basically, any time you’re speaking at a “smart” device, NLP is translating your words into machine-understandable signals and vice versa.

The field of NLP research has undergone a significant evolution in recent years, as its core systems have migrated from older Recurrent and Convolutional Neural Networks toward Google’s Transformer architecture, which greatly increases training efficiency.

Dr. Noah D. Goodman, Associate Professor of Psychology and Computer Science, and Linguistics at Stanford University, told Engadget that, with RNNs, “you'll have to go time-step by time-step or like word by word through the data and then do the same thing backward.” In contrast, with a transformer, “you basically take the whole string of words and push them through the network at the same time.”

“It really matters to make that training more efficient,” Dr. Goodman continued. “Transformers, they're cool … but by far the biggest thing is that they make it possible to train efficiently and therefore train much bigger models on much more data.”
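The difference Dr. Goodman describes can be caricatured in a few lines of Python (a hypothetical toy, not a real RNN or transformer): the recurrent version must walk the sequence one token at a time, because each hidden state depends on the previous one, while the transformer-style version touches every position in a single pass.

```python
# Toy contrast (illustrative only, not real model architectures):
# sequential RNN-style processing vs. whole-sequence processing.

def rnn_step(hidden: float, token: float) -> float:
    # One recurrent update: the new state depends on the previous one,
    # so tokens must be processed strictly in order.
    return 0.5 * hidden + token

def rnn_encode(tokens: list[float]) -> float:
    hidden = 0.0
    for tok in tokens:  # word by word, step by step
        hidden = rnn_step(hidden, tok)
    return hidden

def transformer_encode(tokens: list[float]) -> list[float]:
    # Crude stand-in for self-attention: every position mixes with
    # every other at once, so the per-position work has no sequential
    # dependency and can run in parallel on modern hardware.
    mean = sum(tokens) / len(tokens)
    return [tok + mean for tok in tokens]

sentence = [1.0, 2.0, 3.0]
print(rnn_encode(sentence))          # one state after a sequential pass
print(transformer_encode(sentence))  # all positions updated together
```

The sketch leaves out everything that makes real models work (learned weights, attention scores, nonlinearities); it only shows the data-flow difference that makes transformer training so much more parallelizable.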

Talkin’ jive ain’t just for turkeys

While many species’ communication systems have been studied in recent years — most notably those of cetaceans like whales and dolphins, but also the southern pied babbler, for its song’s potentially syntactic qualities, and vervet monkeys’ communal predator warning system — none has shown the sheer complexity of the call of the avian family Paridae: the chickadees, tits and titmice.

Dr. Jeffrey Lucas, professor in the Biological Sciences department at Purdue University, told Engadget that the Paridae call “is one of the most complicated vocal systems that we know of. At the end of the day, what the [field’s voluminous number of research] papers are showing is that it's god-awfully complicated, and the problem with the papers is that they grossly under-interpret how complicated [the calls] actually are.”

These parids often live in socially complex, heterospecific flocks, mixed groupings that include multiple songbird and woodpecker species. The complexity of the birds’ social system is correlated with an increased diversity in communications systems, Dr. Lucas said. “Part of the reason why that correlation exists is because, if you have a complex social system that's multi-dimensional, then you have to convey a variety of different kinds of information across different contexts. In the bird world, they have to defend their territory, talk about food, integrate into the social system [and resolve] mating issues.”

The chickadee call consists of at least six distinct notes set in an open-ended vocal structure, which is both monumentally rare in non-human communication systems and the reason for the chickadee call’s complexity. An open-ended vocal system means that “increased recording of chick-a-dee calls will continually reveal calls with distinct note-type compositions,” explained the 2012 study, Linking social complexity and vocal complexity: a parid perspective. “This open-ended nature is one of the main features the chick-a-dee call shares with human language, and one of the main differences between the chick-a-dee call and the finite song repertoires of most songbird species.”
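To get a feel for why an open-ended system dwarfs a finite repertoire, here's a hedged combinatorial sketch. It assumes six note types whose repetition counts can vary freely within a fixed ordering (a simplification of the call structure described in the parid literature, not a claim about any particular dataset): the number of distinct note-type compositions keeps growing with call length.

```python
from math import comb

# Hypothetical sketch of open-endedness: if a call strings together
# six note types in a fixed order, with each type repeated any number
# of times, then a call of n notes is a way of splitting n among the
# six types -- a "weak composition" of n into 6 ordered parts.

def distinct_calls(n: int, note_types: int = 6) -> int:
    """Number of distinct note-type compositions of an n-note call."""
    return comb(n + note_types - 1, note_types - 1)

for n in (2, 5, 10, 20):
    print(n, distinct_calls(n))
# the count grows without bound as calls lengthen -- unlike a songbird
# species with a fixed, finite song repertoire
```

Real chick-a-dee calls carry more structure than this toy model admits, but the arithmetic illustrates the study's point: longer recordings keep revealing new compositions.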

Tea Stražičić for Engadget/Silica Magazine

Dolphins have no need for kings

Training language models isn’t simply a matter of shoving in large amounts of data. When training a model to translate an unknown language into one you speak, you need at least a rudimentary understanding of how the two languages correlate with one another, so that the translated text retains the proper intent of the speaker.

“The strongest kind of data that we could have is what's called a parallel corpus,” Dr. Goodman explained, which is basically having a Rosetta Stone for the two tongues. In that case, you’d simply have to map between specific words, symbols and phonemes in each language — figure out what means “river” or “one bushel of wheat” in each and build out from there.

Without that perfect translation artifact, so long as you have large corpuses of data for both languages, “it's still possible to learn a translation between the languages, but it hinges pretty crucially on the idea that [there's a shared] latent conceptual structure,” Dr. Goodman continued, which assumes that both cultures’ definitions of “one bushel of wheat” are generally equivalent.

Goodman points to the word pairs ‘man and woman’ and ‘king and queen’ in English. “The structure, or geometry, of that relationship we expect [to hold] in English; if we were translating into Hungarian, we would also expect those four concepts to stand in a similar relationship,” Dr. Goodman said. “Then effectively the way we'll learn a translation now is by learning to translate in a way that preserves the structure of that conceptual space as much as possible.”
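That structure-preservation idea can be shown with made-up toy vectors (illustrative only; real word embeddings are learned from data and have hundreds of dimensions): when the man-to-woman offset matches the king-to-queen offset, the classic analogy "king - man + woman" lands exactly on "queen".

```python
# Toy word vectors, invented for illustration. The offset from "man"
# to "woman" is the same as the offset from "king" to "queen", so the
# four concepts form the parallel geometry Dr. Goodman describes.

vectors = {
    "man":   [1.0, 0.0, 0.0],
    "woman": [1.0, 1.0, 0.0],
    "king":  [1.0, 0.0, 1.0],
    "queen": [1.0, 1.0, 1.0],
}

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

# king - man + woman: apply the "gender offset" to "king"
analogy = add(sub(vectors["king"], vectors["man"]), vectors["woman"])
print(analogy)                      # [1.0, 1.0, 1.0]
print(analogy == vectors["queen"])  # True -- the geometry is preserved
```

A translation learned between two such spaces tries to keep these relative positions intact, which is why shared conceptual structure matters so much more than word-for-word matching.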

Having a large corpus of data to work with in this situation also enables unsupervised learning techniques to be used to “extract the latent conceptual space,” Dr. Goodman said, though that method is more resource intensive and less efficient. However, if all you have is a large corpus in only one of the languages, you’re generally out of luck.

“For most human languages we assume the [quartet concepts] are kind of, sort of similar, like, maybe they don't have ‘king and queen’ but they definitely have ‘man and woman,’” Dr. Goodman continued. “But I think for animal communication, we can't assume that dolphins have a concept of ‘king and queen’ or whether they have ‘men and women.’ I don't know, maybe, maybe not.”

And without even that rudimentary conceptual alignment to work from, discerning the context and intent of an animal’s call — much less deciphering the syntax, grammar and semantics of the underlying communication system — becomes much more difficult. “You're in a much weaker position,” Dr. Goodman said. “If you have the utterances in the world context that they're uttered in, then you might be able to get somewhere.”

Basically, if you can obtain multimodal data that provides context for the recorded animal call — the environmental conditions, time of day or year, the presence of prey or predator species, etc. — you can “ground” the language data in the physical environment. From there you can “‘assume that English grounds into the physical environment in the same way as this weird new language grounds into the physical environment’ and use that as a kind of bridge between the languages.”
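A minimal sketch of what such grounding might look like in practice: tally which environmental context each call type co-occurs with, and treat the strongest association as that call’s grounded meaning. All call and context names here are invented for illustration:

```python
from collections import Counter

# Hypothetical "grounded" recordings: each call is logged together with
# the physical context it was uttered in (all field values are invented).
recordings = [
    {"call": "trill-A", "context": "predator_nearby"},
    {"call": "trill-A", "context": "predator_nearby"},
    {"call": "chirp-B", "context": "food_found"},
    {"call": "trill-A", "context": "predator_nearby"},
    {"call": "chirp-B", "context": "food_found"},
]

# Ground each call type by tallying the contexts it co-occurs with;
# the dominant context is the bridge from call to candidate meaning.
by_call = {}
for r in recordings:
    by_call.setdefault(r["call"], Counter())[r["context"]] += 1

for call, contexts in by_call.items():
    meaning, _ = contexts.most_common(1)[0]
    print(f"{call} -> {meaning}")
```

Real grounding pipelines work with continuous sensor streams rather than neat labels, but the principle is the same: the environment, not a dictionary, supplies the shared reference points.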

Unfortunately, the challenge of translating bird calls into English (or any other human language) is going to fall squarely into the fourth category. This means we’ll need more data and a lot of different types of data as we continue to build our basic understanding of the structures of these calls from the ground up. Some of those efforts are already underway.

The Dolphin Communication Project, for example, employs a combined “mobile video/acoustic system” to capture both the utterances of wild dolphins and their relative positions in physical space, giving researchers added context for the calls. Biologging tags — animal-borne sensors affixed to hide, hair, or horn that track the locations and conditions of their hosts — continue to shrink in size while growing in both capacity and capability, which should help researchers gather even more data about these communities.

What if birds are just constantly screaming about the heat?

Even if we won’t be able to immediately chat with our furred and feathered neighbors, gaining a better understanding of how they at least talk to each other could prove valuable to conservation efforts. Dr. Lucas points to a recent study he participated in that found environmental changes induced by climate change can radically change how different bird species interact in mixed flocks. “What we showed was that if you look across the disturbance gradients, then everything changes,” Dr. Lucas said. “What they do with space changes, how they interact with other birds changes. Their vocal systems change.”

“The social interactions for birds in winter are extraordinarily important because you know, 10 gram bird — if it doesn't eat in a day, it's dead,” Dr. Lucas continued. “So information about their environment is extraordinarily important. And what those mixed species flocks do is to provide some of that information.”

However, that network quickly breaks down as the habitat degrades, and in order to survive “they have to really go through fairly extreme changes in behavior and social systems and vocal systems … but that impacts fertility rates, and their ability to feed their kids and that sort of thing.”

Better understanding their calls will help us better understand their levels of stress, which can serve both modern conservation efforts and agricultural ends. “The idea is that we can get an idea about the level of stress in [farm animals], then use that as an index of what's happening in the barn and whether we can maybe even mitigate that using vocalizations,” Dr. Lucas said. “AI probably is going to help us do this.”

“Scientific sources indicate that noise in farm animal environments is a detrimental factor to animal health,” Jan Brouček of the Research Institute for Animal Production Nitra observed in 2014. “Especially longer lasting sounds can affect the health of animals. Noise directly affects reproductive physiology or energy consumption.” That continuous drone is thought to also indirectly impact other behaviors, including habitat use, courtship, mating, reproduction and the care of offspring.

Conversely, a 2021 review, “The effect of music on livestock: cattle, poultry and pigs,” has shown that playing music helps to calm livestock and reduce stress during times of intensive production. We can measure that reduction in stress based on what sorts of happy sounds those animals make. Like listening to music in another language, we can get with the vibe, even if we can't understand the lyrics.

This article originally appeared on Engadget at https://www.engadget.com/why-humans-cant-use-natural-language-processing-to-speak-with-the-animals-143050169.html?src=rss