Posts with «author_name|andrew tarantola» label

Meta is spinning off the PyTorch framework into its own AI research foundation

In 2016, Meta (then but a simple country Facebook) launched its open-source AI research library, the PyTorch framework. Six years and 150,000 projects from 2,400 contributors later, Meta announced on Monday that the PyTorch project will soon spin out from the company’s direct control to become its own entity, the PyTorch Foundation, a subsidiary within the larger Linux Foundation nonprofit hegemony.

Over the last half decade, PyTorch has grown to become a leading standard for the AI research community, with Meta CEO Mark Zuckerberg noting in Monday’s press release that some 80 percent of “researchers who submit their work at major ML conferences, such as NeurIPS or ICML, harness the framework.”

“We have built libraries that support some of the principal domains of the AI field, such as torchvision, which powers most of the world’s modern computer vision research,” Zuckerberg continued. “The framework will continue to be a part of Meta’s AI research and engineering work.”

But PyTorch isn’t just Meta’s baby; it serves as a technological underpinning for much of Amazon Web Services’ work, as well as that of Microsoft Azure and OpenAI. As such, the PyTorch Foundation “will boast a wide-ranging governing board composed of representatives from AMD, Amazon Web Services, Google Cloud, Meta, Microsoft Azure, and Nvidia, with the intention to expand further over time.” And to ensure that the fledgling foundation does not lose sight of the values that it embodies, the new organization will adhere to four principles of “remaining open, maintaining neutral branding, staying fair, and forging a strong technical identity.” Apparently “don’t be evil” was already taken.

Though PyTorch will be freed of its direct oversight, Meta intends to continue employing the framework as its primary AI research platform and to support it financially accordingly. Zuckerberg did note, however, that the company plans to maintain “a clear separation between the business and technical governance” of the foundation.

Hitting the Books: How to uncover the true nature of the multiverse

It's difficult to describe the state of the universe's affairs back when the whole of everything was compressed to a size slightly smaller than the period at the end of this sentence, because the concepts of time and space literally didn't yet apply. But that challenge hasn't stopped pioneering theoretical astrophysicist Dr. Laura Mersini-Houghton from seeking knowledge at the edge of the known universe and beyond. In her new book, Before the Big Bang, Mersini-Houghton recounts her early life in communist Albania, traces her rise to prominence in the male-dominated field of astrophysics and discusses her research into the multiverse, which could fundamentally rewrite our understanding of reality.

Mariner Books

Excerpted from Before The Big Bang: The Origin of the Universe and What Lies Beyond by Laura Mersini-Houghton. Published by Mariner Books. Copyright © 2022 by Laura Mersini-Houghton. All rights reserved.


Scientific investigations of problems like the creation of the universe, which we can neither observe nor reproduce and test in a lab, are similar to detective work in that they rely on intuition as well as evidence. Like a detective, as pieces of the puzzle start falling into place, researchers can intuitively sense the answer is close. This was the feeling I had as Rich and I tried to figure out how we could test our theory about the multiverse. Rationally, it seemed like a long shot, but intuitively, it seemed achievable.

Finally, a potential solution hit me. I realized that the key to testing and validating this theory was hidden in quantum entanglement — because decoherence and entanglement were two sides of the same coin! I could rewind the creation story all the way back to its quantum-landscape roots, when our wave-universe was entangled with others.

I already knew that the separation — the decoherence — of the branches of the wave function of the universe (which then become individual universes) was triggered by their entanglement with the environmental bath of fluctuations. Now I wondered if we could calculate and find any traces of this early entanglement imprinted on our sky today.

This might sound like a contradiction. How could our universe possibly still be entangled with all the other universes all these eons after the Big Bang? Our universe must have separated from them in its quantum infancy. But as I wrestled with these issues, I realized that it was possible to have a universe that had long since decohered but that also retained its infantile “dents” — minor changes in shape caused by the interaction with other surviving universes that had been entangled with ours during the earliest moments — as identifiable birthmarks. The scars of its initial entanglement should still be observable in our universe today.

The key was in the timing. Our wave-universe was decohering around the same time as the next stage, the particle universe, was going through its own cosmic inflation and coming into existence. Everything we observe in our sky today was seeded from the primordial fluctuations produced in those first moments, which took place over the smallest units of measurable time, far less than a second. In principle, during those moments, as entanglement was being wiped out, its signatures could have been stamped on the inflaton and its fluctuations. There was a chance that the sort of scars that I was imagining had formed during this brief period. And if they had, they should be visible in the skies.

Understanding how scars formed from entanglement is less complicated than you might imagine. I started by trying to create a mental picture of the entanglement’s scarring of our sky. I visualized all the surviving universes from the branches of the wave function of the universe, including ours, as a bunch of particles spread around the quantum multiverse. Because they all contain mass and energy, they interact with (pull on) one another gravitationally, just as Newton’s apple had its path of motion curved by interacting with the Earth’s mass, thus guiding it to the ground. However, the apple was also being pulled on by the moon, the sun, all the other planets in our solar system, and all the stars in the universe. The Earth’s mass has the strongest force, but that does not mean these other forces do not exist. The net effect that entanglement left on our sky is captured by the combined pulling on our universe by other infant universes. Similar to the weak pulling from stars on the famous apple, at present, the signs of entanglement in our universe are incredibly small relative to the signs from cosmic inflation. But they are still there!

I will admit it... I was excited by the mere thought that I potentially had a way to glimpse beyond our horizon and before the Big Bang! Through my proposal of calculating and tracking entanglement in our sky, I may very well have pinned down, for the very first time, a way of testing the multiverse. What thrilled me most about this idea was its potential for making possible what for centuries we thought was impossible — an observational window to glimpse in space and in time beyond our universe into the multiverse. Our expanding universe provides the best cosmic laboratory for hunting down information about its infancy because everything we observe at large scales in our universe today was also present at its beginning. The basic elements of our universe do not vanish over time; they simply rescale their size with the expansion of the universe.

And here is why I thought of using quantum entanglement as the litmus test for our theory: Quantum theory contains a near-sacred principle known as “unitarity,” which states that no information about a system can ever be lost. Unitarity is a law of information conservation. It means that signs of the earlier quantum entanglement of our universe with the other surviving universes must still exist today. Thus, despite decoherence, entanglement can never be wiped from our universe’s memory; it is stored in its original DNA. Moreover, these signs have been encoded in our sky since its infancy, since the time the universe started as a wave on the landscape. Traces of this earlier entanglement would simply stretch out with the expansion of the universe as the universe became a much larger version of its infant self.

I was concerned that these signatures, which have been stretched by inflation and the expansion of the universe, would be quite weak. But on the basis of unitarity, I believed that however weak they were, they were preserved somewhere in our sky in the form of local violations or deviations from uniformity and homogeneity predicted by cosmic inflation.

Rich and I decided to calculate the effect of quantum entanglement on our universe to find out if any traces were left behind, then fast-forward them from infancy to the present and derive predictions for what kind of scars we should be looking for in our sky. If we could identify where we needed to look for them, we could test them by comparing them with actual observations.

Rich and I started on this investigation with help from a physicist in Tokyo, Tomo Takahashi. I first got to know Tomo at UNC Chapel Hill in 2004 when we overlapped by one year. He was a postdoc about to take a faculty position in Japan, and I had just arrived at UNC. We enjoyed interacting, and I saw the high standards Tomo maintained for his work and his incredible attention to detail. I knew he was familiar with the computer simulation program that we needed in order to compare the predictions based on our theory with actual data about matter and radiation signatures in the universe. In 2005, I called Tomo, and he agreed to collaborate with us.

Rich, Tomo, and I decided that the best place to begin our search was in the CMB — cosmic microwave background, the afterglow from the Big Bang. CMB is the oldest light in the universe, a universal “ether” permeating the entire cosmos throughout its history. As such, it contains a sort of exclusive record of the first millisecond in the life of the universe. And this silent witness of creation is still all around us today, making it an invaluable cosmic lab.

The energy of the CMB photons in our present universe is quite low; their frequencies peak around the microwave range (160 gigahertz), much like the photons in your kitchen microwave when you warm your food. Three major international scientific experiments — the COBE, WMAP, and Planck satellites (with a fourth one on the way), dating from the 1990s to the present — have measured the CMB and its much weaker fluctuations to exquisite precision. We even encounter CMB photons here on Earth. Indeed, seeing and hearing CMB used to be an everyday experience in the era of old TV sets: when changing channels, the viewer would experience the CMB signal in the form of static — the blurry, buzzing gray and white specks that appeared on the TV screen.
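The 160 gigahertz figure quoted above follows directly from the CMB's measured temperature of roughly 2.725 kelvin via the frequency form of Wien's displacement law. As a quick illustrative check (the constant and temperature below are standard textbook values, not taken from the excerpt):

```python
# Wien's displacement law, frequency form: nu_peak ≈ 58.79 GHz per kelvin of
# blackbody temperature. T_CMB is the measured CMB temperature in kelvin.
WIEN_GHZ_PER_K = 58.79
T_CMB = 2.725  # kelvin

nu_peak_ghz = WIEN_GHZ_PER_K * T_CMB
print(round(nu_peak_ghz))  # 160 — the microwave-range peak cited in the text
```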

But if our universe started purely from energy, what can we see in the CMB photons that gives us a nascent image of the universe? Here, quantum theory, specifically Heisenberg’s uncertainty principle, provides the answer. According to the uncertainty principle, quantum uncertainty, displayed as fluctuations in the initial energy of inflation, is unavoidable. When the universe stops inflating, it is suddenly filled with waves of quantum fluctuations of the inflaton energy. The whole range of fluctuations, some with mass and some without, are known as density perturbations. The shorter waves in this spectrum, those that fit inside the universe, become photons or particles, depending on their mass (reflecting the phenomenon of wave-particle duality).

The tiny tremors in the fabric of the universe that induce weak ripples or vibrations in the gravitational field, what are known as primordial gravitational waves, hold information on what particular model of inflation took place. They are incredibly small, at one part in about ten billion of the strength of the CMB spectrum, and therefore are much harder to observe. But they are preserved in the CMB.

Ford updates its BlueCruise driver assist with hands-free lane changing and more

BlueCruise, Ford's intelligent adaptive cruise control system, already offers drivers a number of assistive features such as lane centering, street sign recognition and hands-free highway driving along more than 130,000 miles of US roadways. On Thursday, Ford announced that it will begin releasing the program's version 1.2 update later this fall, starting with the 2023 Mach-E. The update will include new assists like hands-free lane changing, in-lane repositioning and predictive speed assist. The same additions are also coming to Lincoln owners with the release of ActiveGlide 1.2 (Lincoln's reskinned version of BlueCruise).

Ford

Hands-free lane changing is what it sounds like: just tap the turn signal and the vehicle will slide over a lane when it's safe to do so. It'll also preemptively suggest changes if it sees you coming up on a slower vehicle. Predictive speed assist is built to spot sharp turns ahead and safely guide the vehicle through them, while in-lane repositioning will cause the vehicle to hug either line so as to provide some additional padding between your bumper and the semi in the next lane.

The new features will first appear with the 2023 Ford Mustang Mach-E Select and 2023 Lincoln Navigator Standard. Owners will also need to have the Ford Co-Pilot360 system installed and subscribe to a $600 three-year service plan, which keeps the hands-free driving maps up to date. They'll also have to have either the FordPass or Lincoln Way companion app installed on their phone. 2023 Mach-E and Navigator owners will receive the first three months of service free as an introductory demo.

Twitter finally gets around to adding direct Insta and Snap sharing to its Android app

Why screencap when you can simply share? Twitter Support announced on Thursday that its Android app will soon receive the same functionality that its iOS counterpart already enjoys: the ability to share tweets directly to Instagram or Snapchat.

We enjoyed the Tweet. Now everyone should enjoy it too.

Sharing a Tweet directly to Snapchat and Instagram Stories is now available on Android (already on iOS!)

And we added LinkedIn sharing on Android and iOS. Tap the share icon on a Tweet to try it out.

— Twitter Support (@TwitterSupport) September 8, 2022

What's more, Twitter is adding LinkedIn direct sharing to both Android and iOS so your echo chamber will be able to bounce around through three separate social media silos. Twitter is also working to increase its cross-platform reach in India, where TechCrunch reports that the social media company is already testing out a "Share to Whatsapp" button for users in that market. 

Jeep announces plans to bring four new EV models to market by 2025

In what has become a much less shocking announcement in the automotive industry over the past few years, Jeep revealed on Thursday its plan to release four new EV models in the US and Europe by 2025 as the company seeks to "become the leading electrified SUV brand in the world." Furthermore, Jeep has set a goal for 50 percent of US sales and 100 percent of EU sales to be battery electric vehicles (BEVs) by 2030.

Fiat Chrysler

The new model lineup expands upon the success of the Wrangler 4xe plug-in hybrid and the recently announced Grand Cherokee 4xe. It will include a new Recon and Wagoneer, both of which were first unveiled during Thursday's livestream and will be available in both North American and European markets, as well as an Avenger EV coming to Europe "early next year," per Jeep PR.

The Recon will make its public debut next year, with production expected to begin in 2024. Reservations for that model open in early 2023. The Wagoneer will be an entirely new take on a stalwart Jeep model, with the company reportedly "targeting a range of 400 miles on a single charge, 600 hp and a 0-60 mph time of around 3.5 seconds." It too will be open for reservation early next year. Complementing the Wagoneer's midsize bulk will be the compact Avenger SUV, though it won't be arriving stateside to start. Instead it'll be marketed to Europe and Asia, offering a targeted range of 400 km. It will debut publicly in October and should hit dealer show floors by the second quarter of 2023.

iOS 16 will be available on September 12th

Today's iPhone 2022 event was chock-full of marquee reveals with the new iPhone 14 and 14 Pro — not to mention the new Watch Ultra line. But tucked away amidst the news torrenting out of Cupertino, Apple announced on Wednesday that iOS 16, which all this new hardware runs, will be available as a free download beginning September 12th. Not everybody will be eligible to upgrade, however.

Among the updated operating system's new features is a redesigned lock screen focused on "communication, sharing, and intelligence" with more expansive wallpaper options, enhanced messaging capabilities and photo sharing, and improved Live Text performance. iOS 16 will also be the first to offer Apple's new Emergency SOS service, which will enable folks stranded in the backcountry to contact emergency responders via low-bandwidth satellite communications. That service will be free for the first two years, though the company has not yet released pricing for after the introductory period.

If you have an iPhone 7 or older, you will not have access to the new OS, unfortunately. Owners of the iPhone 8 and newer models, up to today's announced iPhone 14 and 14 Pro, will be notified once the update is pushed live.

Hitting the Books: Newfangled oceanographers helped win WWII using marine science

Lethal Tides tells the story of pioneering ocean researcher Mary Sears and her leading role in creating one of the most important intelligence-gathering operations of World War II. Languishing in academic obscurity and roundly ignored by her male colleagues, Sears is selected for command by Roger Revelle, the godfather of climate change science, and put in charge of the Oceanographic Unit of the Navy Hydrographic Office. She and her team of researchers are tasked with helping make the Navy's atoll-hopping campaign in the Pacific a reality: analyzing ocean currents, mapping bioluminescence fields and deep-water crevasses that could reveal or conceal US subs from the enemy, and charting the shore and surf conditions of the Pacific islands and Japan itself.

Harper Collins

From Lethal Tides by Catherine Musemeche. Copyright © 2022 by Catherine Musemeche. Reprinted by permission of William Morrow, an imprint of HarperCollins Publishers.


— Washington, D.C., 1943 —

Four months into her job at the Oceanographic Unit, Sears had learned a lot about what the military needed from oceanographers. She had learned it from meeting with Roger Revelle and his cohorts on the Joint Chiefs Subcommittee on Oceanography where she listened to concerns about what the navy was lacking and took detailed notes. She had learned it from answering requests from every branch of the military for tidal data, wave forecasts, and currents to support tactical operations overseas. She had learned it from gathering all the known references on drift and drafting an urgently needed manual to help locate men lost at sea. The more she took in, the more she understood exactly how dire the lack of oceanographic intelligence was and how it could undermine military operations. And now she was going to have to do something about it.

Sears was no longer at Woods Hole, where she had been sidelined by her male colleagues who sailed on Atlantis and collected her specimens while she stayed onshore. For the first time in her life, she was in charge. It was now her responsibility to set up and direct the operations of an oceanographic intelligence unit researching vital questions that impacted the war. She had never been asked to set agendas, call meetings, or give people orders, much less make sure they carried them out, but she was going to have to do those things to get the military the information they needed to win the war. She was going to have to take the lead.

To assume the role of leader, Sears would need to push through her innate reserved tendencies and any thoughts racing around in her head that screamed you don’t belong here. Taking charge of a team of oceanographers did not come naturally to a bench scientist who worked alone all day staring into a microscope, especially if that scientist was a woman, but Sears had learned from watching Revelle. He had started as an academic in a tweed jacket with elbow patches, but when the navy made him a lieutenant he took on the persona of “the man in charge.”

When Revelle walked into the conference room of the Munitions Building—tall, broad shouldered, and uniformed—he was in complete control. He spoke in a booming, decisive voice. He had an answer for every question. He solved problems. Now, thanks to the overly confident Revelle, Sears was wearing the uniform too. She had stepped into his shoes at the Hydrographic Office. She was not going to let anyone think she couldn’t fill them.

During the first year of the war there had been a mad scramble in Washington to gather information about the countries where troops might be fighting, especially distant locales like New Guinea, Indochina, Formosa, and all the tiny islands dotting the sixty-four million square miles of the Pacific Ocean. World War II spilled across the globe into places most Americans had never heard of and where the military had never been. It was unlike any other war Americans had fought.

Getting to these places would be the easy part. The navy could navigate its way to just about any far-off target anywhere in the world, thanks to the nautical charts maintained by the Hydrographic Office, but what would it find when it got there? Were the beaches flat and wide or would they be narrow, steep, and difficult to land on? Was the terrain mountainous, volcanic, or swampy? Would high winds and waves impede a smooth landing? Would they land during the rainy season? Who were the native people and what language did they speak? Were there drivable roads once troops got across the beaches?

All these details mattered because going to war was more than hauling men, tanks, rifles, and ammunition to a designated site and attacking the enemy. The troops needed to come prepared for whatever they might find, which meant knowing everything they could about an area in advance.

The military searched their files for background materials. They found spotty reports scattered among files of government agencies but no comprehensive references that spanned the globe and nothing that left them with a sense of what to expect when they went to war. The years between World War I and World War II stretched across the lean budgets of the Depression years. The military had languished along with the rest of the country—training soldiers with Springfield rifles manufactured in 1903 and using borrowed cruise liners to transport troops. With Congress keeping the purse strings tight, there had been no money to spend gathering intel for wars that might pop up one day in some remote corner of the world. The file cabinets were all but empty. As one intelligence official summed it up, “We were caught so utterly unprepared.”

What would the armed forces do now to catch up in the midst of an ongoing war?

It was a problem that had vexed Roosevelt even before the war. To help remedy the intelligence gap, he had appointed General William Donovan in mid-1941 as coordinator of information, a role that morphed into the director of the Office of Strategic Services (OSS) during World War II. But Donovan too was getting a late start, and his mission was focused on espionage and sabotage, not foreign terrain.

The logical source of information for the military was its own intelligence agencies. The Office of Naval Intelligence (ONI), the Office of Strategic Services (OSS), the Army Corps of Engineers, and G-2, the army’s intelligence unit, had all started spinning out their own internal intelligence reports, duplicating effort and expense. But like jealous siblings guarding their toys, the agencies kept their reports to themselves, which only hampered preparations in the long run. Furthermore, these groups had not anticipated the massive landscape this war would cover and there were still many gaps to fill.

“Who would have thought, when Germany marched on Poland, that we would suddenly have to range our inquiries from the cryolite mines of Ivigtut, Greenland, to the guayule plants of Yucatan, Mexico; or from the twilight settlements of Kiska to the coral beaches of Guadalcanal. Who even thought we should be required to know (or indeed suspected that we did not know) everything about the beaches of France and the tides and currents of the English Channel,” a CIA official later mused.

That was exactly the problem: there was no predicting just what information might be needed in a war of global proportions. Whether it was knowing where to collect an essential mineral or finding the latest tidal data, the need for information, beyond just estimating enemy troop strength or weaponry, was enormous. The military leaders trying to plan the war—where to send troops first and what operations to execute when they got there—were particularly hindered. Their information needs were unfolding in real time, and without a centralized forum for gathering, collating, analyzing, and disseminating information, the United States found itself at a disadvantage in war planning.

Roosevelt began to realize the extent of the problem when he started meeting with Churchill and the British Chiefs of Staff in a series of war planning conferences. At the Arcadia Conference, held two weeks after World War II began, the British had the edge in strategic planning. They had operated under a system for almost two decades where the British Chiefs of Staff served as a supreme, unified command, reaping the benefits of cooperation between the Admiralty and the British Army. The United States had no such corresponding body.

Weeks after the first conference Roosevelt formed his own Joint Chiefs of Staff, a unified, high command in the United States composed of Admiral William D. Leahy, the president’s special military adviser; General George C. Marshall, chief of staff of the army; Admiral Ernest J. King, chief of Naval Operations and commander in chief of the U.S. Fleet; and General Henry H. Arnold, deputy army chief of staff for air and chief of the Army Air Corps. This impressive array of leaders could draw up battle plans, but it would take time to turn themselves into a truly cooperative body.

At the next war planning conference, at Casablanca in January 1943, Roosevelt noticed yet another fault in the American war planning apparatus—the information gap between the British and the Americans. No matter what subject came up in any corner of the world, the British had prepared a detailed analysis on the area at issue and pulled those reports out of their briefcases. The Americans weren’t able to produce a single study that could match the quality of the British reports, a failing that frustrated and embarrassed the president.

“We came, we listened and we were conquered,” Brigadier General Albert C. Wedemeyer, the army’s chief planner, shared with a colleague following the Casablanca Conference. “They had us on the defensive practically all the time.”

The British had a two-year start on the Americans in this war and they had learned the hard way about the need to collect reliable topographic intelligence. During the German invasion of Norway in 1940 the Royal Air Force Bomber Command had been forced to rely on a 1912 edition of a Baedeker’s travel guide for tourists as the sole reference in planning a counterattack. In the same offensive, the Royal Navy had only scanty Admiralty charts to guide an attack on a major port, an intelligence deficiency that could have easily doomed the mission. The British had gotten away with one in their Norway mission, but they knew they had to do better.

So they had formed the Interservices Topographical Department to implement the pooling of topographical intelligence generated by the army, navy, and the Allies, and tasked it with preparing reports in advance of overseas military operations. This was where Churchill’s reports came from and why his aides could pull them out of their briefcases when the most sensitive joint operations were being planned. To be on an equal footing with the British, the Americans needed to be able to do the same, which meant they were going to have to find a way to rectify the lack of information, and fast.

Hitting the Books: How hurricanes work

Hurricane season is currently in full swing across the Gulf Coast and Eastern Seaboard. Despite a disconcertingly quiet start in June, meteorologists still expect a busier-than-usual stretch before the windy weather (hopefully) winds down at the end of November. Among those meteorologists is Matthew Cappucci, who in his new book, Looking Up: The True Adventures of a Storm-Chasing Weather Nerd, recounts his career as a storm chaser, tracing a childhood obsession into gainful adult employment. In the excerpt below, Cappucci explains the inner workings of tropical storms.

Simon and Schuster

Excerpted from Looking Up: The True Adventures of a Storm-Chasing Weather Nerd by Matthew Cappucci. Published by Pegasus Books. Copyright © 2022 by Matthew Cappucci. All rights reserved.


Hurricanes are heat engines. They derive their fury from warm ocean waters in the tropics, where sea surface temperatures routinely hover in the mid- to upper-eighties between July and October. Hurricanes and tropical storms fall under the umbrella of tropical cyclones. They can be catastrophic, but they have a purpose—some scholars estimate they’re responsible for as much as 10 percent of the Earth’s annual equator-to-pole heat transport.

Hurricanes are different from mid-latitude systems. So-called extratropical, or nontropical, storms depend upon variations in air temperature and density to form, and feed off of changing winds. Hurricanes require a calm environment with gentle upper-level winds and a nearly uniform temperature field. Ironic as it may sound, the planet’s worst windstorms are born out of an abundance of tranquility.

The first ingredient is a tropical wave, or clump of thunderstorms. Early in hurricane season, tropical waves can spin up on the tail end of cold fronts surging off the East Coast. During the heart of hurricane season in August and September, they commonly materialize off the coast of Africa in the Atlantic’s Main Development Region. By October and November, sneaky homegrown threats can surreptitiously gel in the Gulf of Mexico or Caribbean.

Every individual thunderstorm cell within a tropical wave has an updraft and a downdraft. The downward rush of cool air collapsing out of one cell can suffocate a neighboring cell, spelling its demise. In order for thunderstorms to coexist in close proximity, they must organize. The most efficient way of doing so is through orienting themselves around a common center, with individual cells’ updrafts and downdrafts working in tandem.

When a center forms, a broken band of thunderstorms begins to materialize around it. Warm, moist air rises within those storms, most rapidly as one approaches the broader system’s low-level center. That causes atmospheric pressure to drop, since air is being evacuated and mass removed. From there, the system begins to breathe.

Air moves from high pressure to low pressure. That vacuums air inward toward the center. Because of the Coriolis force, a product of the Earth’s spin, parcels of air take a curved path into the fledgling cyclone’s center. That’s what causes the system to rotate.

Hurricanes spin counterclockwise in the Northern Hemisphere, and clockwise south of the equator. Though the hottest ocean waters in the world are found on the equator, a hurricane could never form there. That’s because the Coriolis force is zero on the equator; there’d be nothing to get a storm to twist.

As pockets of air from outside the nascent tropical cyclone spiral into the vortex, they expand as barometric pressure decreases. Ordinarily that expansion would cool an air parcel, but because it’s in contact with toasty ocean waters, it maintains a nearly constant temperature; the sea heats it at the same rate it loses heat to its surroundings. The moisture those parcels gather then condenses as they rise, releasing heat into the atmosphere and fueling clouds and rain. As long as a storm is over open water and sea surface temperatures remain sufficiently warm, it can continue to extract oceanic heat content.

Rainfall rates within tropical cyclones can exceed four inches per hour thanks to high precipitation efficiency. Because the entire atmospheric column is saturated, there’s little evaporation to eat away at a raindrop on the way down. As a result, inland freshwater flooding is the number one source of fatalities from tropical cyclones.

The strongest winds are found toward the middle of a tropical storm or hurricane in the eyewall. The greatest pressure gradient, or change of air pressure with distance, is located there. The sharper the gradient, the stronger the winds. That’s because air is rushing down the gradient. Think about skiing — you’ll ski faster if there’s a steeper slope.

When maximum sustained winds surpass 39 mph, the system is designated a tropical storm. Only once winds cross 74 mph is it designated a hurricane. Major hurricanes have winds of 111 mph or greater and correspond to Category 3 strength. A Category 5 contains extreme winds topping 157 mph.
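Those thresholds amount to a simple lookup. Here is a minimal sketch in Python; the 39, 74, 111, and 157 mph breakpoints come from the paragraph above, while the Category 2 and Category 4 boundaries (96 and 130 mph) are filled in from the standard Saffir-Simpson scale rather than this passage:

```python
def classify_cyclone(sustained_wind_mph: float) -> str:
    """Return a storm designation for a maximum sustained wind in mph."""
    if sustained_wind_mph >= 157:
        return "Category 5 hurricane"
    if sustained_wind_mph >= 130:
        return "Category 4 hurricane"   # boundary from the standard scale
    if sustained_wind_mph >= 111:
        return "Category 3 hurricane"   # "major hurricane" begins here
    if sustained_wind_mph >= 96:
        return "Category 2 hurricane"   # boundary from the standard scale
    if sustained_wind_mph >= 74:
        return "Category 1 hurricane"
    if sustained_wind_mph >= 39:
        return "tropical storm"
    return "tropical depression"
```

So a 120 mph storm comes back as a Category 3, the floor of "major hurricane" status.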

Since the winds are derived from air rushing in to fill a void, or deficit of air, the fiercest hurricanes are usually those with the lowest air pressures. The most punishing hurricanes and typhoons may have a minimum central barometric pressure about 90 percent of ambient air pressure outside the storm. That means 10 percent of the atmosphere’s mass is missing.
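That ratio is easy to check with quick arithmetic. The sketch below assumes a typical ambient sea-level pressure of 1,013 millibars, a figure the passage itself doesn't specify:

```python
# Back-of-the-envelope check of the "90 percent" figure above.
# Assumption: ambient sea-level pressure of 1013 millibars (typical,
# but not stated in the passage).
ambient_mb = 1013.0
central_mb = 0.90 * ambient_mb          # eye pressure at ~90% of ambient
deficit_mb = ambient_mb - central_mb    # the "missing" 10 percent of mass

print(round(central_mb, 1))  # 911.7
print(round(deficit_mb, 1))  # 101.3
```

A central pressure near 912 millibars would indeed put a storm among the most intense hurricanes on record.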

Picture stirring your cup of coffee with a teaspoon. You know that dip in the middle of the whirlpool? The deeper the dip, or fluid deficit, the faster the fluid must be spinning. Hurricanes are the same. But what prevents that dip from filling in? Hurricane eyewalls are in cyclostrophic balance.

That means a perfect stasis of forces makes it virtually impossible to “fill in” a storm in steady state. Because of their narrow radius of curvature, parcels of air swirling around the eye experience an incredible outward-directed centrifugal force that exactly equals the inward tug of the pressure gradient force. That leaves them to trace continuous circles.
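That balance can be written compactly. In standard textbook notation (not given in the passage itself), for an air parcel of density $\rho$ circling the eye at radius $r$ with tangential wind speed $v$, cyclostrophic balance reads:

```latex
\frac{v^{2}}{r} = \frac{1}{\rho}\frac{\partial p}{\partial r}
\quad\Longrightarrow\quad
v = \sqrt{\frac{r}{\rho}\,\frac{\partial p}{\partial r}}
```

The left-hand side is the outward centrifugal acceleration; the right-hand side is the inward pressure-gradient acceleration. A sharper pressure gradient or a tighter radius of curvature demands a faster wind to keep the two in balance, which is why the steepest gradients in the eyewall coincide with the fiercest winds.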

If you’ve ever experienced a change in altitude, such as flying on an airplane, or even traveling to the top of a skyscraper, you probably noticed your ears popping. That’s because they were adjusting to the drop in air pressure with height. Now imagine all the air below that height vanished. That’s the equivalent air pressure in the eye of a major hurricane. The disparity in air pressure is why a hurricane is, in the words of Buddy the Elf, “sucky. Very sucky.”

Sometimes hurricanes undergo eyewall replacement cycles, which entail an eyewall shriveling and crumbling into the eye while a new eyewall forms around it and contracts, taking the place of its predecessor. This usually results in a dual wind maximum near the storm’s center as well as a brief plateau in intensification.

In addition to the scouring winds found inside the eyewall, tornadoes, tornado-scale vortices, mini swirls, and other poorly understood small-scale wind phenomena can whip around the eye and result in strips of extreme damage. A mini swirl may be only a couple yards wide, but a 70 mph whirlwind moving in a background wind of 100 mph can carve a narrow path of 170 mph demolition. Their existence was first hypothesized following the passage of Category 5 Hurricane Andrew through south Florida in 1992, and modern-day efforts to study hurricane eyewalls using mobile Doppler radar units have since confirmed them.

Within a hurricane’s eye, air sinks and warms, drying out and creating a dearth of cloud cover. It’s not uncommon to see clearing skies or even sunshine. The air is hot and still, an oasis of peace enveloped in a hoop of hell.

There’s such a discontinuity between the raucous winds of the eyewall and deathly stillness of the eye that the atmosphere struggles to transition. The eyes of hurricanes are often filled with mesovortices, or smaller eddies a few miles across, that help flux and dissipate angular momentum into the eye. Sometimes four or five mesovortices can cram into the eye, contorting the eyewall into a clover-like shape. That makes for a period of extraordinary whiplash on the inner edge of the eyewall as alternating clefts of calamitous wind and calm punctuate the eye’s arrival.

Google is taking reservations to talk to its supposedly sentient chatbot

At the I/O 2022 conference this past May, Google CEO Sundar Pichai announced that the company would, in the coming months, gradually make its experimental LaMDA 2 conversational AI model available to select beta users. Those months have come. On Thursday, researchers at Google's AI division announced that interested users can register to explore the model as access gradually rolls out.

Regular readers will recognize LaMDA as the supposedly sentient natural language processing (NLP) model that a Google researcher got himself fired over. NLPs are a class of AI model designed to parse human speech into actionable commands; they power digital assistants and chatbots like Siri or Alexa and do the heavy lifting for real-time translation and subtitle apps. Basically, whenever you're talking to a computer, it's using NLP tech to listen.

"I'm sorry, I didn't quite get that" is a phrase that still haunts many early Siri adopters' dreams, though in the past decade NLP technology has advanced at a rapid pace. Today's models boast hundreds of billions of parameters, can translate hundreds of languages in real time and even carry lessons learned in one conversation through to subsequent chats.

Google's AI Test Kitchen will enable beta users to experiment and explore interactions with the NLP in a controlled, presumably supervised, sandbox. Access will begin rolling out to small groups of US Android users today before spreading to iOS devices in the coming weeks. The program will offer a set of guided demos that show users LaMDA's capabilities.

"The first demo, 'Imagine It,' lets you name a place and offers paths to explore your imagination," Tris Warkentin, Group Product Manager at Google Research, and Josh Woodward, Senior Director of Product Management for Labs at Google, wrote in a Google AI blog Thursday. "With the 'List It' demo, you can share a goal or topic, and LaMDA will break it down into a list of helpful subtasks. And in the 'Talk About It (Dogs Edition)' demo, you can have a fun, open-ended conversation about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to veer off-topic."  

The focus on safe, responsible interactions is a common one in an industry where there's already a name for chatbot AIs that go full Nazi, and that name is Tay. Thankfully, that exceedingly embarrassing incident was a lesson that Microsoft and much of the rest of the AI field have taken to heart, which is why we see such stringent restrictions on what users can have Midjourney or Dall-E 2 conjure, or what topics Facebook's Blenderbot 3 can discuss.

That's not to say the system is foolproof. "We’ve run dedicated rounds of adversarial testing to find additional flaws in the model," Warkentin and Woodward wrote. "We enlisted expert red teaming members... who have uncovered additional harmful, yet subtle, outputs." Those include failing "to produce a response when they’re used because it has difficulty differentiating between benign and adversarial prompts," and producing "harmful or toxic responses based on biases in its training data." As many AIs these days are wont to do.

After 25 years, we still don't see bicycle kicks at the RoboCup

This year’s RoboCup symposium, held in Bangkok, Thailand, marks the 25th anniversary of the event, an international competition dedicated to the advancement of robotics and artificial intelligence technologies. The original goal of the event was to get the state of robotics into robust enough shape that one might field a team of robotic soccer players capable of beating a World Cup champion (human) team by 2050 – but a lot has changed since 1997.


Both the event and its mechanical entrants have evolved by leaps and bounds in the intervening years. The number of teams participating has ballooned tenfold since the inaugural event, from 38 to more than 300, with competitors now coming from more than 40 nations worldwide. And rather than fall down stairs, today’s cutting-edge humanoid constructs are backflipping off them.

“We think of [the competition] as a grand challenge akin to the Apollo missions that sought to land a person on the moon,” Dr. Peter Stone, Professor of Computer Science at the University of Texas at Austin and Executive Director of Sony AI America, told Engadget via email. “In both cases, one might reasonably ask, why is it worthwhile to try to achieve such a goal? What do we gain by landing a person on the moon? What do we gain by creating superhuman soccer playing robots?”

“In the case of the Apollo mission, there were several spin-off technologies in areas such as remote telemetry, body monitoring, breathing apparatuses, fabric structures, communications, and food packaging,” he continued. “In the case of RoboCup, there have been several start-up companies founded by RoboCup participants using RoboCup technologies, most prominently Kiva Systems, which became Amazon Robotics.”

“This vision inspired my early research on AI planning and machine learning in multiagent systems,” Stone wrote in a 2021 Sony AI blog, “and has continued to inspire my research and that of my students on these areas and robot learning throughout the years.”

The idea that led to the RoboCup — could a soccer competition be used to promote robotics and AI research? — had been percolating in the academic space since the early 1990s, according to RoboCup, though it wasn't until 1995’s International Joint Conference on Artificial Intelligence that the official groundwork for a RoboCup competition was laid. Following a requisite two-year gap for teams to sort out funding and training, Nagoya, Japan hosted the first event with more than 5,000 spectators in attendance.

Today, teams can compete in both simulated and physical soccer matches using an array of humanoid robots — sorted into divisions by size, capability, and pedalness — as well as pit their robotic first responders against the Cup’s hazard-strewn disaster courses, best one another at robo-buttling in the @Home competition, and devise the most efficient warehouse floor operation in Industry. There’s even a dedicated league for junior roboticists that spans the fields of soccer, search and rescue, and on-stage performance.

“One of the most important scientific contributions of RoboCup has been to demonstrate how competitions can drive research and also provide a way of objectively benchmarking different technologies,” Dr. Claude Sammut, Head of the Artificial Intelligence Research Group at the University of New South Wales and Deputy Director of the iCinema Centre for Interactive Cinema Research, told Engadget.

Sammut notes RoboCup Rescue as one valuable benchmarking example. The competition is supported, in part, by the US National Institute of Standards and Technology (NIST). “The arena uses the test methods developed by NIST to measure the performance of robots for disaster recovery and ordnance disposal. Each year, the test methods are updated to reflect real-world experience, so teams are encouraged to extend the capabilities of their robots to handle increasingly complex tasks.”

Training robots to play soccer “is a great problem to work on because it needs progress across most areas of AI and robotics (and it’s fun and motivating for students),” Sammut said, but learning that game won’t teach robots all they need to know about navigating in the wider world. The Cup’s Rescue course requires the robot to overcome unknown terrain to extract victims while @Home demands robust human-robot interaction and planning skills. “Humans working with robots is an important goal, so introducing domestic service robots pushes us in that direction,” he said.

That skill development has kept pace with the field’s steady stream of hardware advancements. Sammut points to “the massive increases in performance of low cost, low power CPUs and GPUs” to handle a greater degree of processing onboard, as well as the precipitous price drop of sensory equipment. “The first depth camera we bought for our rescue robots in 2006 cost €10,000. Now you can buy much better ones for a few hundred dollars and your iPhone and iPad have them built in.”

That said, even with a quarter century of technological advancements, today’s RoboCup competitors more closely resemble old Asimo than they do Sonny. Matches aren’t so much fast-moving spectacles of mechanical might and sporting prowess as they are watching a pair of tottering automatons shuffle after a ball while their developers trail behind them, ready to intervene in the event of a misstep or stumble.

“Motors and batteries have improved somewhat but they need further development to be able to get better speed, agility and lifetime,” Sammut conceded. “Soft, light but strong materials would also make the robots safer to be around. I wouldn't want to be on the field with the current large humanoid robots because a tackle would really hurt!”