NASA has announced a new streaming service called NASA+ that’s set to hit most major platforms next week. It’ll be completely free, with no subscription requirements, and you won’t be forced to sit through ads. NASA+ will be available starting November 8.
"We launch more than rockets. This month, we launch our new streaming service, NASA+. No subscription req. No ads. No cost. Family friendly! Emmy-winning live shows. Original series. On most major platforms." https://t.co/McWnWOKXSu pic.twitter.com/5ffjptumUJ
The space agency previously teased the release of its upcoming streaming service over the summer as it more broadly revamped its digital presence. At the time, it said NASA+ would be available on the NASA iOS and Android apps, and streaming players including Roku, Apple TV and Fire TV. You’ll also be able to watch it on the web.
There aren’t many details about the content itself just yet, but NASA says its family-friendly programming “embeds you into our missions” with live coverage and original video series. NASA already has its own broadcast network, NASA TV, and the new streaming service appears to be an expansion of it. But we’ll know more when it officially launches next Wednesday.
Whether it's for a tour of the International Space Station (ISS) or a battle with Darth Vader, most VR enthusiasts are looking to get off this planet and into the great beyond. HTC, however, is sending VR headsets to the ISS to give lonely astronauts something to do besides staring into the star-riddled abyss.
The company partnered with XRHealth and engineering firm Nord Space to send HTC Vive Focus 3 headsets to the ISS as part of an ongoing effort to improve the mental health of astronauts during long assignments on the station. The headsets are pre-loaded with software specifically designed to meet the mental health needs of literal space cadets, so they aren’t just for playing Walkabout Mini Golf during off hours (though that’s not a bad idea).
The headsets feature new camera tracking tech that was specially developed and adapted to work in microgravity, including eye-tracking sensors to better assess the mental health status of astronauts. These sensors are coupled with software intended to “maintain mental health while in orbit.” The headsets have also been optimized to stabilize alignment and, as such, reduce the chances of motion sickness. Can you imagine free-floating vomit in space?
Danish astronaut Andreas Mogensen will be the first ISS crew member to use the VR headset for preventative mental health care during his six-month mission as commander of the space station. HTC notes that astronauts are often isolated for “months and years at a time” while stationed in space.
This leads to the question of internet connectivity. After all, Mogensen and his fellow astronauts would likely want to connect with family and friends while wearing their brand-new VR headsets. Playing Population: One by yourself is not exactly satisfying.
The internet used to be really slow on the ISS, with speeds resembling a dial-up AOL connection in 1995. However, recent upgrades have boosted speeds on the station to around 600 megabits per second (Mbps). For comparison, the average download speed in the US is about 135 Mbps, so we’d actually be the bottleneck in this scenario, not the astronauts. The ISS connection should handle even the most data-hungry VR applications.
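A quick, back-of-the-envelope check (the per-stream bitrate below is an illustrative assumption, not a figure from HTC or NASA) shows how much headroom that leaves:

```python
# Back-of-the-envelope comparison of the ISS link with typical US broadband.
# The per-stream VR bitrate is an illustrative assumption, not an official figure.
iss_mbps = 600          # reported ISS connection speed
us_avg_mbps = 135       # average US download speed cited above
vr_stream_mbps = 100    # assumed bitrate for one high-quality wireless VR stream

print(f"ISS link vs. US average: {iss_mbps / us_avg_mbps:.1f}x")
print(f"Concurrent VR streams the ISS link could carry: {iss_mbps // vr_stream_mbps}")
```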
These souped-up Vive Focus 3 headsets are heading up to the space station shortly, though there’s no arrival date yet. It’s worth noting that it took some massive feats of engineering to even get these headsets to work in microgravity, as so many aspects of a VR headset depend on normal Earth gravity.
NYU is launching a project to spur the development of immersive 3D video for dance education — and perhaps other areas. Boosted by a $1.2 million four-year grant from the National Science Foundation, the undertaking will try to make Point-Cloud Video (PCV) tech viable for streaming.
A point cloud is a set of data points in a 3D space representing the surface of a subject or environment. NYU says Point-Cloud Video, which strings together point-cloud frames into a moving scene, has been under development for the last decade. However, it’s typically too data-intensive for practical purposes, requiring bandwidth far beyond the capabilities of today’s connected devices.
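To get a rough sense of why that is, here's a back-of-the-envelope sketch; the point count and frame rate are illustrative assumptions, not figures from the NYU project:

```python
# Rough sketch of why raw Point-Cloud Video is so bandwidth-hungry.
# The point count and frame rate are illustrative assumptions, not NYU figures.
import numpy as np

num_points = 1_000_000  # points per frame (assumed)
fps = 30                # frames per second (assumed)

xyz = np.zeros((num_points, 3), dtype=np.float32)  # 12 bytes of position per point
rgb = np.zeros((num_points, 3), dtype=np.uint8)    # 3 bytes of color per point

bytes_per_frame = xyz.nbytes + rgb.nbytes
gbps = bytes_per_frame * fps * 8 / 1e9
print(f"~{bytes_per_frame / 1e6:.0f} MB per frame, ~{gbps:.1f} Gbps uncompressed")
# -> ~15 MB per frame and ~3.6 Gbps: far beyond most home connections,
#    which is why compression and smarter delivery are the project's focus.
```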
The researchers plan to address those obstacles by “reducing bandwidth consumption and delivery latency, and increasing power consumption efficiency so that PCVs can be streamed far more easily,” according to an NYU Engineering blog post published Monday. Project leader Yong Liu, an NYU electrical and computer engineering professor, believes modern breakthroughs make that possible. “With recent advances in the key enabling technologies, we are now at the verge of completing the puzzle of teleporting holograms of real-world humans, creatures and objects through the global Internet,” Liu wrote on Monday.
ChatGPT maker OpenAI launched a model last year that can create 3D point clouds from text prompts. Engadget reached out to the project leader to clarify whether it or other generative AI tools are part of the process, and we’ll update this article if we hear back.
The team will test the technology with the NYU Tisch School of the Arts and the Mark Morris Dance Group’s Dance Center. Dancers from both organizations will perform on a volumetric capture stage. The team will stream their movements live and on-demand, offering educational content for aspiring dancers looking to study from high-level performers — and allowing engineers to test and tweak their PCV technology.
The researchers envision the work opening doors to more advanced VR and mixed reality streaming content. “The success of the proposed research will contribute towards wide deployment of high quality and robust PCV streaming systems that facilitate immersive augmented, virtual and mixed reality experience and create new opportunities in many domains, including education, business, healthcare and entertainment,” Liu said.
“Point-Cloud Video holds tremendous potential to transform a range of industries, and I’m excited that the research team at NYU Tandon prioritized dance education to reap those benefits early,” said Jelena Kovačević, NYU Tandon Dean.
The explosive growth in artificial intelligence in recent years — crowned with the meteoric rise of generative AI chatbots like ChatGPT — has seen the technology take on many tasks that, formerly, only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right.
In his new book, A Brief History of Intelligence, author Max Bennett focuses on the five evolutionary "breakthroughs," amidst myriad genetic dead ends and unsuccessful offshoots, that led our species to our modern minds. Bennett also shows that the same advancements that took humanity eons to evolve can be adapted to help guide the development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can't quite get a grasp on the vagaries of human speech.
GPT-3 is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT-3 can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases:
One plus one equals _____
Roses are red, violets are _____
You’ve seen similar sentences endless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT-3 impressive, however, is not that it just predicts the next word of a sequence it has seen a million times — that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT-3 can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can _____.
Could you predict that the next word was do? I’m guessing you could, even though you had never seen that exact sentence before. The point is that both GPT-3 and the neocortical areas for language seem to be engaging in prediction. Both can generalize past experiences, apply them to new sentences, and guess what comes next.
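As a concrete, highly simplified illustration of that training loop, here is a toy sketch: a single softmax layer over a tiny vocabulary whose weights are nudged toward the correct next word, one pair at a time. (GPT-3 is a deep transformer trained on long contexts; only the basic predict-compare-nudge cycle is the same.)

```python
# Toy next-word prediction: nudge the weights slightly toward the right answer
# for every (previous word, next word) pair in the text.
import numpy as np

corpus = "one plus one equals two . roses are red violets are blue .".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # row = previous word, columns = next-word scores

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for _ in range(200):                      # many passes over the text
    for prev, nxt in zip(corpus[:-1], corpus[1:]):
        p = softmax(W[idx[prev]])         # predicted distribution over the next word
        grad = p.copy()
        grad[idx[nxt]] -= 1.0             # cross-entropy gradient for this pair
        W[idx[prev]] -= 0.1 * grad        # nudge the weights ever so slightly

print(vocab[int(np.argmax(W[idx["equals"]]))])  # prints 'two'
```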
GPT-3 and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if it is given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions:
If 3x + 1 = 3, then x equals _____
I am in my windowless basement, and I look toward the sky, and I see _____
He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and _____
I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally _____
Here something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question. In the other questions, you probably, even for only a split second, paused to visualize yourself in a basement looking upward, and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words.
We have, of course, already explored this phenomenon—it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world.
I gave the same four questions to GPT-3; its responses complete each prompt below:
If 3x + 1 = 3, then x equals 1
I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy.
He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and caught it. It was a lot of fun!
I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean.
All four of these responses demonstrate that GPT-3, as of June 2022, lacked an understanding of even simple aspects of how the world works. If 3x + 1 = 3, then x equals 2/3, not 1. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you’d passed through Chicago one hour ago, you would not yet be at the coast. GPT-3’s answers lacked common sense.
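Spelling out the algebra behind the first answer:

$$3x + 1 = 3 \;\Rightarrow\; 3x = 2 \;\Rightarrow\; x = \tfrac{2}{3}$$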
What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that’s the point: Even a model trained on the entire corpus of the internet, running up millions of dollars in server costs — requiring acres of computers on some unknown server farm — still struggles to answer common sense questions, those presumably answerable by even a middle-school human.
Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question:
Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be?
1) Librarian
2) Construction worker
If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates—did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if 95 percent of librarians are meek and only 5 percent of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian.
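Using the excerpt's own illustrative numbers (100 construction workers for every librarian, 95 percent of librarians meek, 5 percent of construction workers meek), the expected counts per librarian are:

$$0.95 \times 1 = 0.95\ \text{meek librarians} \qquad \text{vs.} \qquad 0.05 \times 100 = 5\ \text{meek construction workers},$$

so a meek person is still roughly five times more likely to be a construction worker than a librarian.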
The idea that the neocortex works by rendering an inner simulation and that this is how humans tend to reason about things explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can’t help but render an imagined scene of the robbery, and you can’t help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating — we fill in characters and scenes, often missing the true causal and statistical relationships between things.
It is with questions that require simulation where language in the human brain diverges from language in GPT-3. Math is a great example of this. The foundation of math begins with declarative labeling. You hold up two fingers or two stones or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one.
Humans don’t learn math the way GPT-3 learns math. Indeed, humans don’t learn language the way GPT-3 learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child’s already present inner simulation.
A human brain, but not GPT-3, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four.
You don’t even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world. But it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can’t see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast and accurate set of physical rules and attributes of the actual world.
To be fair, GPT-3 can, in fact, answer many math questions correctly. GPT-3 will be able to answer 1 + 1 =___ because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT-3 would. But when you think about why 1 + 1 = 2, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know that 1 + 1 = 2 in a way that GPT-3 does not.
The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both of these systems comes from experiments pitting one system against the other. Consider the cognitive reflection test, designed to evaluate someone’s ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it):
Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly:
Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
Here again, if you are like most people, your instinct is to say “One hundred minutes,” but if you think about it, you would realize the answer is still five minutes.
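Worked out explicitly, with b standing for the ball's price in dollars:

$$b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05$$

And for the widgets: five machines making five widgets in five minutes means each machine makes one widget every five minutes, so 100 machines working in parallel still need only five minutes to make 100 widgets.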
And indeed, as of December 2022, GPT-3 got both of these questions wrong in exactly the same way people do: it answered ten cents to the first question and one hundred minutes to the second.
The point is that human brains have an automatic system for predicting words (one probably similar, at least in principle, to models like GPT-3) and an inner simulation. Much of what makes human language powerful is not the syntax of it, but its ability to give us the necessary information to render a simulation about it and, crucially, to use these sequences of words to render the same inner simulation as other humans around us.
A sounding rocket toting a special imaging and spectroscopy instrument will take a brief trip to space Sunday night to try to capture as much data as it can on a long-admired supernova remnant in the Cygnus constellation. Its target, a massive cloud of dust and gas known as the Cygnus Loop or the Veil Nebula, was created after the explosive death of a star an estimated 20,000 years ago — and it’s still expanding.
NASA plans to launch the mission at 11:35 PM ET on Sunday October 29 from the White Sands Missile Range in New Mexico. The Integral Field Ultraviolet Spectroscopic Experiment, or INFUSE, will observe the Cygnus Loop for only a few minutes, capturing light in the far-ultraviolet wavelengths to illuminate gases ranging from 90,000 to 540,000 degrees Fahrenheit. It’s expected to fly to an altitude of about 150 miles before parachuting back to Earth.
The Cygnus Loop sits about 2,600 light-years away, and was formed by the collapse of a star thought to be 20 times the size of our sun. Since the aftermath of the event is still playing out, with the cloud currently expanding at a rate of 930,000 miles per hour, it’s a good candidate for studying how supernovae affect the formation of new star systems. “Supernovae like the one that created the Cygnus Loop have a huge impact on how galaxies form,” said Brian Fleming, principal investigator for the INFUSE mission.
“INFUSE will observe how the supernova dumps energy into the Milky Way by catching light given off just as the blast wave crashes into pockets of cold gas floating around the galaxy,” Fleming said. Once INFUSE is back on the ground and its data has been collected, the team plans to fix it up and eventually launch it again.
Data from a meteorite impact on Mars that was recorded by NASA’s InSight lander in 2021 is now helping to clear up some confusion about the red planet’s interior makeup. A pair of studies published today in the journal Nature separately determined that Mars’ iron-rich core is smaller and denser than previous measurements suggested, and it’s surrounded by molten rock.
The now-defunct InSight lander, which arrived on Mars in November 2018, spent four years recording seismic waves produced by marsquakes so scientists could get a better understanding of what’s going on beneath the planet’s surface. But estimates of the Martian core based on InSight’s initial readings from nearby quakes didn’t quite add up. At the time, scientists found the core’s radius to be somewhere between 1,118 and 1,149 miles — much larger than expected — and that it contained a perplexingly high proportion of lighter elements alongside its heavy liquid iron.
The numbers for those light elements were “bordering on the impossible,” said Dongyang Huang of ETH Zurich, a co-author of one of the studies. “We have been wondering about this result ever since.” Then, a breakthrough came when a meteorite struck Mars in September 2021 all the way across the planet from where InSight is positioned, generating seismic waves that ETH Zurich doctoral student Cecilia Duran said “allowed us to illuminate the core.”
Based on those measurements, the two teams have found that Mars’ core more likely has a radius of about 1,013 to 1,060 miles. This, the ETH Zurich team notes, is about half the radius of Mars itself. A smaller core would also be denser, meaning the previously inexplicable abundance of light elements may actually exist in smaller, more reasonable amounts. This is all surrounded by a layer of molten silicates about 90 miles thick, the teams found, which skewed the initial estimates. And it’s unlike anything found in Earth’s interior.
According to Vedran Lekic from University of Maryland, a co-author of the second paper, the layer serves as somewhat of a “heating blanket” for the core that “concentrates radioactive elements.” Studying it could help scientists uncover answers about Mars’ formation and its lack of an active magnetic field.
The moon has been a focal point for space research and exploration for years, yet we’re still far from fully understanding its origins. Take its age, for example – researchers have just discovered that the moon is about 40 million years older than previously thought.
In a study published by the European Association of Geochemistry, scientists looked at the age of crystal formations found in rock samples from the moon’s surface to determine its age. The prevalence of crystals called zircon in the samples, collected years ago from NASA’s Apollo program, suggests that the surface of the moon was created around 110 million years after the formation of the solar system. The scientists used analytical techniques including mass spectrometry to measure the presence of particular molecules in the rock. Another method of analysis, atom-probe tomography, was used to detect the amount of radioactive decay in the samples — which in turn was used to determine the age of the crystals in the rock.
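As a general illustration of the principle at work (not necessarily the exact isotope system used in the study), radiometric dating converts a measured ratio of daughter atoms (D) to surviving parent atoms (P) into an age (t) via the parent isotope's decay constant (λ):

$$t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right)$$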
NASA’s working theory is that a Mars-sized object collided with Earth several billion years ago to form the moon, and this new estimate of the moon’s age gives scientists a rough idea of when that collision might have occurred. The finding also highlights the importance of exploratory missions like Apollo 17, the 1972 crewed mission at the heart of this discovery: its geological survey of the lunar surface brought 243 pounds of material back to Earth, some of which researchers are still examining 51 years later.
To date, NASA says that more than 105 robotic spacecraft have been launched to explore the moon, so there are plenty of opportunities for further findings. Although the next NASA-led crewed mission to the moon won't happen until 2025 at the earliest, we can expect future rover programs to shed more light on the makeup of the lunar surface.
NASA has revealed that it has already processed 70.3 grams of rocks and dust collected by the OSIRIS-REx mission from asteroid Bennu. That means the mission has way exceeded its goal of bringing 60 grams of asteroid samples back to Earth — especially since NASA scientists have yet to open the primary sample container that made its way back to our planet in September. Apparently, they're struggling to open the mission's Touch-and-Go Sample Acquisition Mechanism (TAGSAM) and could not remove two of its 35 fasteners using the tools currently available to them.
The scientists are processing the samples inside a specialized glovebox (pictured above) with a flow of nitrogen in order to keep them from being exposed to our atmosphere and any contaminants. They can't just use any implement to break the container's fasteners open either: The tool must fit inside the glovebox, and it also must not compromise the samples' integrity. NASA has sealed the primary container sample for now, while it's developing the procedure to be able to open it over the next few weeks.
If you're wondering where the 70.3 grams of rocks and dust came from, well, NASA collected part of it from the external sample receptacle but outside TAGSAM itself. It also includes a small portion of the samples inside TAGSAM, taken by holding down its mylar flap and reaching inside with tweezers or a scoop. NASA's initial analysis of the material published earlier this month said it showed evidence of high carbon content and water, and further studies could help us understand how life on Earth began. The agency plans to continue analyzing and "characterizing" the rocks and dust it has already taken from the sample container, so we may hear more details about the samples even while TAGSAM remains sealed.
SpaceX has struck a deal with the European Space Agency (ESA) to launch four of Europe's Galileo navigation satellites into orbit using its Falcon 9 rocket, The Wall Street Journal has reported. It'll be the first time Elon Musk's company has launched any EU satellites containing classified equipment.
The ESA had planned to launch Galileo satellites using its homegrown Ariane 6 rocket, but the latter has seen frequent delays and isn't expected to make its inaugural launch until 2024 at the earliest. The deal is still subject to final approval by the EU Commission and member states, according to ESA director of navigation Javier Benedicto.
SpaceX would launch the satellites from US territory, according to the terms of the deal. It would mark the first time Galileo equipment has been carried into orbit outside of European territory, barring early test versions launched from Kazakhstan. All other Galileo satellites have launched from the Guiana Space Center in Kourou, French Guiana — using Soyuz rockets at first and the Ariane 5 system later on.
News of the deal isn't a big surprise, as it was reported this summer that Europe was seeking to cut a deal with SpaceX and United Launch Alliance to "exceptionally launch Galileo satellites." Another alternative would have been Russian-built Soyuz rockets, but that was off the table due to EU sanctions against Russia over its invasion of Ukraine.
Ariane 6 was originally slated to launch in 2023, but multiple delays have pushed the first launch back to 2024. Recently, a short hotfire of the Vulcain 2.1 engine was delayed, and a long-duration static-fire test was pushed back from early October to late November. The Ariane 5 rocket is no longer an option, as it was retired after its final launch in July.
SpaceX launched Europe's Euclid telescope in July and is slated to launch two other EU spacecraft in the near future. As it stands, the ESA only plans to make four Galileo launches using the Falcon 9. Musk himself has had a tenuous relationship with the EU — most recently, a top European Union official warned him about the spread of misinformation on his social network X amid the Israel-Hamas war.
The Galileo system is key for Europe, as it makes the bloc independent of the US Global Positioning System (GPS) and satnav systems from Russia and China. It's also used by EU military and security services to transmit encrypted messages. The service went live in 2016, but additional satellites are required to bolster the existing network. "It is a matter of robustness," said Benedicto. "We have 10 satellites that are ready to be launched, and those satellites should be in space, not on the ground."
Space isn't hard only on account of the rocket science. The task of taking a NASA mission from development and funding through construction and launch — all before we even use the thing for science — can span decades. Entire careers have been spent putting a single satellite into space. Nobel-winning NASA physicist John Mather, mind you, has already helped send up two.
In their new book, Inside the Star Factory: The Creation of the James Webb Space Telescope, NASA's Largest and Most Powerful Space Observatory, author Christopher Wanjek and photographer Chris Gunn take readers on a behind-the-scenes tour of the James Webb Space Telescope's journey from inception to orbit, weaving examinations of the radical imaging technology that lets us peer deeper into the early universe than ever before with profiles of the researchers, advisors, managers, engineers and technicians who made it possible through three decades of effort. In this week's Hitting the Books excerpt, a look at JWST project scientist John Mather and his own improbable journey from rural New Jersey to NASA.
John Mather is a patient man. His 2006 Nobel Prize in Physics was thirty years in the making. That award, for unswerving evidence of the Big Bang, was based on a bus-sized machine called COBE — yet another NASA mission that almost didn’t happen. Design drama? Been there. Navigate unforeseen delays? Done that. For NASA to choose Mather as JWST Project Scientist was pure prescience.
Like Webb, COBE — the Cosmic Background Explorer — was to be a time machine to reveal a snapshot of the early universe. The target era was just 370,000 years after the Big Bang, when the universe was still a fog of elementary particles with no discernable structure. This is called the epoch of recombination, when the hot universe cooled to a point to allow protons to bind with electrons to form the very first atoms, mostly hydrogen with a sprinkling of helium and lithium. As the atoms formed, the fog lifted, and the universe became clear. Light broke through. That ancient light, from the Big Bang itself, is with us today as remnant microwave radiation called the cosmic microwave background.
Tall but never imposing, demanding but never mean, Mather is a study in contrasts. His childhood was spent just a mile from the Appalachian Trail in rural Sussex County, New Jersey, where his friends were consumed by earthly matters such as farm chores. Yet Mather, whose father was a specialist in animal husbandry and statistics, was more intrigued by science and math. At age six he grasped the concept of infinity when he filled up a page in his notebook with a very large number and realized he could go on forever. He loaded himself up with books from a mobile library that visited the farms every couple of weeks. His dad worked for Rutgers University Agriculture Experiment Station and had a laboratory on the farm with radioisotope equipment for studying metabolism and liquid nitrogen tanks with frozen bull semen. His dad also was one of the earliest users of computers in the area, circa 1960, maintaining milk production records of 10,000 cows on punched IBM cards. His mother, an elementary school teacher, was quite learned, as well, and fostered young John’s interest in science.
A chance for some warm, year-round weather ultimately brought Mather in 1968 to University of California, Berkeley, for graduate studies in physics. He would fall in with a crowd intrigued by the newly detected cosmic microwave background, discovered by accident in 1965 by radio astronomers Arno Penzias and Robert Wilson. His thesis advisor devised a balloon experiment to measure the spectrum, or color, of this radiation to see if it really came from the Big Bang. (It does.) The next obvious thing was to make a map of this light to see, as theory suggested, whether the temperature varied ever so slightly across the sky. And years later, that’s just what he and his COBE team found: anisotropy, an unequal distribution of energy. These micro-degree temperature fluctuations imply matter density fluctuations, sufficient to stop the expansion, at least locally. Through the influence of gravity, matter would pool into cosmic lakes to form stars and galaxies hundreds of millions of years later. In essence, Mather and his team captured a sonogram of the infant universe.
Yet the COBE mission, like Webb, was plagued with setbacks. Mather and the team proposed the mission concept (for a second time) in 1976. NASA accepted the proposal but, that year, declared that this satellite and most others from then on would be delivered to orbit by the Space Shuttle, which itself was still in development. History would reveal the foolishness of such a plan. Mather understood immediately. This wedded the design of COBE to the cargo bay of the unbuilt Shuttle. Engineers would need to meet precise mass and volume requirements of a vessel not yet flown. More troublesome, COBE required a polar orbit, difficult for the Space Shuttle to deliver. The COBE team was next saddled with budget cuts and compromises in COBE’s design as a result of cost overruns of another pioneering space science mission, the Infrared Astronomical Satellite, or IRAS. Still, the tedious work continued of designing instruments sensitive enough to detect variations of temperatures just a few degrees above absolute zero, about −270°C. From 1980 onward, Mather was consumed by the creation of COBE all day every day. The team needed to cut corners and make risky decisions to stay within budget. News came that COBE was to be launched on the Space Shuttle mission STS-82-B in 1988 from Vandenberg Air Force Base. All systems go.
Then the Space Shuttle Challenger exploded in 1986, killing all seven of its crew. NASA grounded Shuttle flights indefinitely. COBE, now locked to Shuttle specifications, couldn’t launch on just any other rocket system. COBE was too large for a Delta rocket at this point; ironically, Mather had the Delta in mind in his first sketch in 1974. The team looked to Europe for a launch vehicle, but this was hardly an option for NASA. Instead, the project managers led a redesign to shave off hundreds of pounds, to slim down to a 5,000-pound launch mass, with fuel, which would just make it within the limits of a Delta by a few pounds. Oh, and McDonnell Douglas had to build a Delta rocket from spare parts, having been forced to discontinue the series in favor of the Space Shuttle.
The team worked around the clock over the next two years. The final design challenge was ... wait for it ... a sunshield that now needed to be folded into the rocket and spring-released once in orbit, a novel approach. COBE got the greenlight to launch from Vandenberg Air Force Base in California, the originally desired site because it would provide easier access to a polar orbit compared to launching a Shuttle from Florida. Launch was set for November 1989. COBE was delivered several months before.
Then, on October 17, the California ground shook hard. A 6.9-magnitude earthquake struck Santa Cruz County, causing widespread damage to structures. Vandenberg, some 200 miles south, felt the jolt. As pure luck would have it, COBE was securely fastened only because two of the engineers minding it secured it that day before going off to get married. The instrument suffered no damage and launched successfully on November 18. More drama came with the high winds on launch day. Myriad worries followed in the first weeks of operation: the cryostat cooled too quickly; sunlight reflecting off of Antarctic ice played havoc with the power system; trapped electrons and protons in the Van Allen belts disrupted the functioning of the electronics; and so on.
All the delays, all the drama, faded into a distant memory for Mather as the results of the COBE experiment came in. Data would take four years to compile. But the results were mind-blowing. The first result came weeks after launch, when Mather showed the spectrum to the American Astronomical Society and received a standing ovation. The Big Bang was safe as a theory. Two years later, at an April 1992 meeting of the American Physical Society, the team showed their first map. Data matched theory perfectly. This was the afterglow of the Big Bang revealing the seeds that would grow into stars and galaxies. Physicist Stephen Hawking called it “the most important discovery of the century, if not of all time.”
Mather spoke humbly of the discovery at his Nobel acceptance speech in 2006, fully crediting his remarkable team and his colleague George Smoot, who shared the prize with him that year. But he didn’t downplay the achievement. He noted that he was thrilled with the now broader “recognition that our work was as important as people in the professional astronomy world have known for so long.”
Mather maintains that realism today. While concerned about delays, threats of cancellation, cost overruns, and not-too-subtle animosity in the broader science community over the “telescope that ate astronomy,” he didn’t let this consume him or his team. “There’s no point in trying to manage other people’s feelings,” he said. “Quite a lot of the community opinion is, ‘well, if it were my nickel, I’d spend it differently.’ But it isn’t their nickel; and the reason why we have the nickel in the first place is because NASA takes on incredibly great challenges. Congress approved of us taking on great challenges. And great challenges aren’t free. My feeling is that the only reason why we have an astronomy program at NASA for anyone to enjoy — or complain about — is that we do astonishingly difficult projects. We are pushing to the edge of what is possible.”
Webb isn’t just a little better than the Hubble Space Telescope, Mather added; it’s a hundred times more powerful. Yet his biggest worry through mission design was not the advanced astronomy instruments but rather the massive sunshield, which needed to unfold. All instruments and all the deployment mechanisms had redundancy engineered into them; there are two or more ways to make them work if the primary method fails. But redundancy only goes so far with a sunshield: it would either work or not work.
Now Mather can focus completely on the science to be had. He expects surprises; he’d be surprised if there were no surprises. “Just about everything in astronomy comes as a surprise,” he said. “When you have new equipment, you will get a surprise.” His hunch is that Webb might reveal something weird about the early universe, perhaps an abundance of short-lived objects never before seen that say something about dark energy, the mysterious force that seems to be accelerating the expansion of the universe, or the equally mysterious dark matter. He also can’t wait until Webb turns its cameras to Alpha Centauri, the closest star system to Earth. What if there’s a planet there suitable for life? Webb should have the sensitivity to detect molecules in its atmosphere, if present.
“That would be cool,” Mather said. Hints of life from the closest star system? Yes, cool, indeed.