Google Maps is getting an "Immersive View" that will offer users digitally rendered looks at major cityscapes, Alphabet CEO Sundar Pichai told the audience at the Google I/O developer conference on Wednesday.
The new feature uses computer vision and AI to blend Maps' existing Street View function with aerial photography to create high-resolution models of the various buildings and urban features of a given location. "With our new immersive view, you’ll be able to experience what a neighborhood, landmark, restaurant or popular venue is like — and even feel like you’re right there before you ever set foot inside," wrote Miriam Daniel, VP of Google Maps, on Wednesday. What's more, Maps' other tools and features can be applied to the view as well, enabling users to see what the area looks like at different times of day and in varying weather conditions.
Immersive View will first be available for Los Angeles, London, New York, San Francisco and Tokyo later this year, with more cities to follow. The company also notes that its recently released eco-routing feature, which lets drivers in the US and Canada pick the most fuel-efficient route for their trip, has already been used to travel 86 billion miles and has prevented the release of roughly half a million metric tons of carbon emissions.
After nearly a decade in development, the second iteration of the Linac Coherent Light Source (LCLS) at the DoE's Stanford Linear Accelerator Center (SLAC) is nearly ready to start throwing photons harder than ever before. Dubbed the LCLS-II, this billion-dollar superconducting particle accelerator upgrade will produce X-rays 10,000 times brighter than those of its predecessor at a world record rate of 1 million pulses per second — all while working at a frosty negative 456 degrees Fahrenheit.
"In just a few hours, LCLS-II will produce more X-ray pulses than the current laser has generated in its entire lifetime," Mike Dunne, director of LCLS, said. "Data that once might have taken months to collect could be produced in minutes. It will take X-ray science to the next level, paving the way for a whole new range of studies and advancing our ability to develop revolutionary technologies to address some of the most profound challenges facing our society."
The original LCLS came online in 2009, shining a billion times brighter than the accelerator it replaced, but was limited to 120 pulses per second because the laws of physics limit the number of electrons that can be pushed simultaneously through the accelerator's labyrinth of room-temperature copper pipes. But by replacing those pipes with more than three dozen cryogenic accelerator modules — interconnected strings of hollow niobium cavities — cooled down to 2 Kelvin (about 4 degrees F above absolute zero), SLAC researchers can massively improve the accelerator's output.
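As a quick sanity check on those units (our arithmetic, not SLAC's), converting kelvin to Fahrenheit gives:

$$T_{^\circ\mathrm{F}} = 1.8\,T_{\mathrm{K}} - 459.67 \;\Rightarrow\; 1.8 \times 2 - 459.67 \approx -456.1\,^\circ\mathrm{F},$$

that is, roughly 3.6 Fahrenheit degrees above absolute zero, consistent with the figures quoted above.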
"To reach this temperature, the linac is equipped with two world-class helium cryoplants, making SLAC one of the significant cryogenic landmarks in the U.S. and on the globe," Eric Fauve, director of the Cryogenic Division at SLAC, said. "The SLAC Cryogenics team has worked on site throughout the pandemic to install and commission the cryogenic system and cool down the accelerator in record time."
Once the electrons have been accelerated to nearly the speed of light by megawatt microwave fields inside all 37 cryomodules, they're fed through a string of undulator magnets that force the electron beam into a zig-zag pattern, generating X-rays. What's more, the undulators can influence the type of X-ray that's produced — either hard X-rays for material imaging, or soft X-rays primarily used to document energy flows and real-time chemical reactions.
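For readers curious how the undulators set the X-ray "color" (this relation is the standard textbook undulator equation, not something from the SLAC announcement), the on-axis wavelength is

$$\lambda_{x} \approx \frac{\lambda_u}{2\gamma^{2}}\left(1 + \frac{K^{2}}{2}\right),$$

where $\lambda_u$ is the magnet period, $\gamma$ is the electron beam's Lorentz factor and $K$ is the deflection parameter, so adjusting the magnet gap (which changes $K$) or the beam energy shifts the output between the softer and harder X-rays described above.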
The LCLS-II first hit the 2 Kelvin mark in mid-April, and with Tuesday's announcement it is now ready to begin conducting research. That work is expected to start later this year and could help us examine cutting-edge materials and biological processes in greater resolution than ever before, advance the state of the art in clean energy technology and even unlock the secrets of the quantum realm by imaging individual atoms.
Forty years after it first began to dabble in quantum computing, IBM is ready to expand the technology out of the lab and into more practical applications — like supercomputing! The company has already hit a number of development milestones since it released its previous quantum roadmap in 2020, including the 127-qubit Eagle processor that uses quantum circuits and the Qiskit Runtime API. IBM announced on Wednesday that it plans to further scale its quantum ambitions and has revised the 2020 roadmap with an even loftier goal of operating a 4,000-qubit system by 2025.
Before it sets about building the biggest quantum computer to date, IBM plans to release its 433-qubit Osprey chip later this year and migrate the Qiskit Runtime to the cloud in 2023, “bringing a serverless approach into the core quantum software stack,” per Wednesday’s release. Those products will be followed later that year by Condor, a quantum chip IBM is billing as “the world’s first universal quantum processor with over 1,000 qubits.”
This rapid, roughly four-fold jump in qubit count will enable users to run increasingly long quantum circuits, while increasing the processing speed — measured in CLOPS (circuit layer operations per second) — from a maximum of 2,900 to over 10,000. Then it’s just a simple matter of quadrupling that capacity in the span of less than 24 months.
To do so, IBM plans to first get sets of multiple processors to communicate with one another both in parallel and in series. This should help develop better error mitigation schemes and improve coordination between processors, both necessary components of tomorrow’s practical quantum computers. After that, IBM will design and deploy chip-level couplers, which “will closely connect multiple chips together to effectively form a single and larger processor,” according to the company, then build quantum communication links to connect those larger multi-processors together into even bigger clusters — essentially daisy-chaining increasingly larger clumps of processors together until they form a functional, modular 4,000-qubit computing platform.
“As quantum computing matures, we’re starting to see ourselves as more than quantum hardware,” IBM researcher Jay Gambetta wrote on Wednesday. “We’re building the next generation of computing. In order to benefit from our world-leading hardware, we need to develop the software and infrastructure capable of taking advantage of it.”
As such, IBM released a set of ready-made primitive programs earlier this year, “pre-built programs that allows developers easy access to the outputs of quantum computations without requiring intricate understanding of the hardware,” per the company. IBM intends to expand that program set in 2023, enabling developers to run them on parallelized quantum processors. “We also plan to enhance primitive performance with low-level compilation and post-processing methods, like introducing error suppression and mitigation tools,” Gambetta said. “These advanced primitives will allow algorithm developers to use Qiskit Runtime services as an API for incorporating quantum circuits and classical routines to build quantum workflows.”
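To make the "primitive" idea concrete, here is a minimal sketch using the open-source reference Sampler that ships with Qiskit; the hosted Qiskit Runtime primitives follow the same run-and-collect pattern, but the shot count below and the omission of account and backend setup are simplifying assumptions, not IBM's production workflow.

```python
# Minimal sketch of the "primitive" pattern: hand a circuit to a Sampler and
# read back quasi-probabilities without touching the hardware details.
# Uses the open-source reference Sampler from qiskit.primitives; the hosted
# Qiskit Runtime service exposes the same run()/result() shape.
from qiskit import QuantumCircuit
from qiskit.primitives import Sampler

# Build a two-qubit Bell-state circuit as a stand-in workload.
bell = QuantumCircuit(2)
bell.h(0)           # put qubit 0 into superposition
bell.cx(0, 1)       # entangle qubit 0 with qubit 1
bell.measure_all()  # measure both qubits

sampler = Sampler()                              # the primitive hides backend plumbing
result = sampler.run(bell, shots=1024).result()  # submit and wait for results
print(result.quasi_dists[0])                     # roughly {0: 0.5, 3: 0.5} for |00> and |11>
```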
These workflows will take a given problem, break it down into smaller quantum and classical programs, chew through those processes in either parallel or series depending on which is more efficient, and then use an orchestration layer to “circuit stitch” all those various data streams back into a coherent result that classical computers can understand. IBM calls its proprietary stitching infrastructure Quantum Serverless and, per the new roadmap, will deploy the feature to its core quantum software stack in 2023.
“We think by next year, we’ll begin prototyping quantum software applications for users hoping to use Qiskit Runtime and Quantum Serverless to address specific use cases,” Gambetta said. “We’ll begin to define these services with our first test case — machine learning — working with partners to accelerate the path toward useful quantum software applications. By 2025, we think model developers will be able to explore quantum applications in machine learning, optimization, finance, natural sciences, and beyond.”
“For many years, CPU-centric supercomputers were society’s processing workhorse, with IBM serving as a key developer of these systems,” he continued. “In the last few years, we’ve seen the emergence of AI-centric supercomputers, where CPUs and GPUs work together in giant systems to tackle AI-heavy workloads. Now, IBM is ushering in the age of the quantum-centric supercomputer, where quantum resources — QPUs — will be woven together with CPUs and GPUs into a compute fabric. We think that the quantum-centric supercomputer will serve as an essential technology for those solving the toughest problems, those doing the most ground-breaking research, and those developing the most cutting-edge technology.”
Together, these hardware and software systems will become IBM Quantum System Two, with the first prototype scheduled to be operational at some point next year.
Notorious facial recognition company Clearview AI has agreed to permanently halt sales of its massive biometric database to all private companies and individuals in the United States as part of a legal settlement with the American Civil Liberties Union, per court records.
Monday's announcement marks the close of a two-year legal dispute brought by the ACLU and privacy advocate groups against the company over allegations that it had violated Illinois data laws. Additionally, Clearview will not offer any of its services to Illinois local and state law enforcement agencies for the next five years, though federal agencies and state departments outside of Illinois will be unaffected. The settlement must still be approved by a federal judge before it takes effect.
“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project, said in Monday's statement. “Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.”
Today's technology landscape is dominated by a small cadre of massive corporations with the likes of Meta, Amazon and Google snapping up fledgling startups before they can grow into potential competitors, ignoring labor laws that don't suit their immediate needs, and generally operating like the dystopian corpro-villains Johnny Mnemonic warned us about. Traditionally, state regulation has acted as a gentle brake against American industries' more problematic tendencies; however, the speed at which modern computing and communications technologies advance has overwhelmed the government's capacity to, well, govern them.
In their new book, Access Rules: Freeing Data from Big Tech for a Better Future, Viktor Mayer-Schönberger, Professor of Internet Governance and Regulation at Oxford, and Thomas Ramge, author of Who's Afraid of AI?, argue passionately against the data-hoarding practices of today's biggest tech companies and call for a more open, equitable means of accessing the information that these companies have amassed. One such method, explored in the excerpt below, involves addressing Big Tech's monopoly power directly, as the Biden administration has in recent years, though the efforts have not been particularly effective.
Early into his term, President Biden appointed Tim Wu, who had argued in favor of breaking up Facebook and written popular books on the dangers of Big Tech market concentration, to the National Economic Council as a special assistant to the president for technology and competition policy. Putting one of the most outspoken advocates of Big Tech trustbusting into a top advisory role is a powerful signal the Biden administration is taking a far more confrontational course.
Wu isn’t alone. His appointment was followed by the choice of Lina Khan for chair of the Federal Trade Commission (FTC). Khan’s youth — she was in her early 30s when nominated — belies her intellectual power and political credentials. A professor at Columbia Law School like Wu, Khan had authored influential papers on the need to fight Big Tech’s unchecked power. And she had explained why existing antitrust law was ill equipped to deal with Silicon Valley platform providers. But Khan isn’t just a Big Tech critic; she also offered a radical solution: regulate Big Tech companies as utilities, much like electricity providers or the venerable AT&T before telecom deregulation. With Khan at the FTC and Wu as advisor having the ear of the president, Big Tech could be in serious trouble.
It's not just antitrust experts serving in government like Tim Wu and Lina Khan who fear that the monopolistic structure of American tech dominance could turn into its Achilles heel. Think tanks and advocacy groups on both left and right have been joining the critics. Disruptive entrepreneurs and venture capitalists such as Elon Musk and Peter Thiel regard the well-rehearsed dance of Big Tech and venture capital with increasing skepticism, concerned that the intricate choreography is thwarting the next generation of disruptive founders and technologies. Taken together, these voices are calling on and supporting regulators and legislators to prevent the most obvious cases of large companies removing potential competitors from the market by acquiring them—cases comparable to Facebook’s takeover of Instagram or Google’s acquisition of Waze. And they call on venture capitalists to take on the role for which Joseph Schumpeter originally conceived this class of investment capital, the role that the venture capitalists on Sand Hill Road in Menlo Park fulfilled up to the first decade of this century: financially support the bringing to market of new, radically better ideas and then enable them to be scaled up.
The antitrust tide is rising in the United States. And yet it’s questionable that well-intentioned activist regulators bolstered by broad public support will succeed. The challenge is a combination of the structural and the political. As Lina Khan herself argued, existing antitrust laws are less than useful. Big Tech may not have violated them sufficiently to warrant breaking them up. And other powerful measures, such as declaring them utilities, require legislative action. Given the delicate power balance in Congress and hyper-partisan politics, it’s likely that such bold legislative proposals would not get enough votes to become enacted. The political factions may agree on the problem, but they are far apart on the solution. The left wants an effective remedy, while the right insists on the importance of market forces and worries about antitrust action micromanaging economic activity. That leaves a fairly narrow corridor of acceptable incremental legislative steps, such as “post-acquisition lockups.” This may be politically palatable, but insufficient to achieve real and sustained success.
The truth is that the current game based on exit strategies works only too well for everyone involved, at least in the short term. The monopolists continue to increase their rents. Entrepreneurs get rich quickly. Venture capitalists reduce risk by optimizing their investments for exiting through a sale. And government? It too earns money on every “Goliath buying David” transaction. Preventing such transactions causes annoyance for everyone involved. Any politician mounting a serious attack on Big Tech USA exposes themselves to the charge of endangering the great successes of American technology companies on global markets—a charge few politicians could fend off.
Despite renewed resolve by the Biden administration to get serious against Big Tech overreach, substantial change still seems elusive in the United States. In contrast, European antitrust authorities have been far more active. The billion-dollar fines lobbed at US Big Tech by Commissioner Vestager’s team surely sound impressive. But, as we mentioned, most of them were reduced on appeal to an amount that the superstar companies with huge cash reserves and skyrocketing profits could easily afford. The European Parliament may not suffer from hyper-partisanship and be willing to strengthen antitrust rules, but their effectiveness is limited by the very fact that almost all Big Tech is not European. At best, Europeans might prevent US Big Tech from buying up innovative European start-ups; the necessary laws for this are increasingly being enacted. But that will do little to break Big Tech’s information power.
The challenge faced by European regulators is shared by regulators around the globe, from the Asian Tigers to the Global South: how can national regulators effectively counter the information might amassed by Silicon Valley superstars? Sure, one could prohibit US Big Tech from operating. But that would deprive the local economy of valuable services. For most nations, such binary disengagement is not an option. And for nations that to an extent can and have disengaged, such as China, their homegrown Big Tech companies confront them with similar problems. The huge fines levied on Alibaba in 2021 surely are surprising for outside observers, but they, too, are targeting symptoms, not the root cause of Big Tech’s power.
Sooner or later, regulators and legislators will have to confront the real problem of reining in Big Tech: whether we look at Draconian measures like breakups or incremental ones like fines and acquisition lockups, these target the symptoms of Big Tech’s information power, but do little to undo the structural advantages the digital superstars possess. It’s little more than cutting a head off Hydra, only to see a new one grow.
To tackle the structural advantage, we have to remember Schumpeter. Schumpeter’s nightmare was that the capacity for innovation would become concentrated within a few large companies. This would lead to a downward spiral of innovation, as major players have less incentive to be disruptive and far more reason to enjoy market power. Contrary to Schumpeter’s fear, this concentration process didn’t occur after World War II, mainly because entrepreneurs had access to abundant capital and could thrive on disruptive ideas. They stood a real chance against the large incumbents of their time, a role more than a few of them took on themselves. But money is no longer the scarce resource limiting innovation. What’s scarce today is access to data. More precisely, such a scarcity is being artificially created.
In the data economy, we’re observing a concentration dynamic driven by narrowing access to the key resource for innovation and accelerated by AI. The dynamic therefore turns on access to data as a raw material. Economic policy to counteract market concentration and a weakening of competition must focus on this structural lever.
If we want to avert Schumpeter’s nightmare, preserve the competitiveness of our economy, and strengthen its capacity for innovation, we have to drastically widen access to data — for entrepreneurs and start-ups and for all players who can’t translate their ideas into innovations without data access. Today, they can only hope to enter the kill zone and be bought up by one of the digital giants. If data flows more freely through broader access, the incentive to use data and gain innovative insights from it increases. We’d turbocharge our economy’s capacity for innovation in a way not seen since the first wave of Internet companies. We would also learn more about the world, make better decisions, and distribute data dividends more broadly.
In late March, Polestar announced that the single-motor Long Range variant of its Polestar 2 EV coupe would be arriving imminently upon US shores and be available starting at $45,900 — $33,400 after federal and state incentives — while its dual-motor sibling would start at $51,200. On Wednesday, the EV automaker announced that those prices would be going up. The single-motor variant will now start at $48,400 — $40,900 after the $7,500 federal tax credit — while the dual-motor AWD version will set buyers back $51,900 ($44,400 after the credit).
The price hike is due in part to the new standard features, updates and upgrades applied to the platform over the past six weeks, according to a Polestar spokesperson. For those extra few hundred to few thousand dollars, PS2 buyers will have access not only to the hundred-plus OTA software updates that have already been released — including the one that boosts the dual-motor's driving range to a respectable 260 miles — but also to the new high-efficiency heat pump announced in April and a more sensitive air quality sensor. That air quality sensor is part of the $4,200 Plus Pack and can show the driver "a breakdown of the air circulating outside of the vehicle, including pollen types," according to the release. Similarly, ordering the Performance Pack (a mere $5,500) will include the recent software upgrade that squeezes an extra 68 HP and 15 lb-ft of torque out of the dual motors.
PS2 shoppers will also have their pick of two new exterior color options — a metallic shade called "Jupiter" and the same metallic black "Space" found on the PS1 — and a light grey "Zinc" option for the interior Nappa leather. Both the 19- and 20-inch rim designs have been updated too.
After decades on the decline, America's labor movement is undergoing a massive renaissance, with Starbucks, Amazon and Apple Store employees leading the way. Though the tech sector has only just begun basking in the newfound glow of collective bargaining rights, the automotive industry has long been a hotbed for unionization. But the movement is not at all monolithic. In the excerpt below from her new book, Fight Like Hell: The Untold History of American Labor, journalist Kim Kelly recalls the summer of 1968, which saw the emergence of a new, more vocal UAW faction, the Dodge Revolutionary Union Movement, coincide with a flurry of wildcat strikes in Big Three plants across the Rust Belt.
As of 2021, the U.S. construction industry is still booming and the building trades are heavily unionized, but not all of the nation’s builders have been so lucky. The country’s manufacturing sector has declined severely since its post–World War II high point, and so has its union density. The auto industry’s shuttered factories and former jobs shipped to countries with lower wages and weaker unions have become a symbol of the waning American empire. But things weren’t always this dire. Unions once fought tooth and nail to establish a foothold in the country’s automobile plants, factories, and steel mills. When those workers were able to harness the power of collective bargaining, wages went up and working conditions improved. The American Dream, or at least, a stable middle class existence, became an achievable goal for workers without college degrees or privileged backgrounds. Many more became financially secure enough to actually purchase the products they made, boosting the economy as well as their sense of pride in their work. Those jobs were still difficult and demanding and carried physical risks, but those workers—or at least, some of those workers—could count on the union to have their back when injustice or calamity befell them.
In Detroit, those toiling on the assembly lines of the Big Three automakers—Chrysler, Ford, and General Motors—could turn to the United Auto Workers (UAW), then hailed as perhaps the most progressive “major” union in the country as it forced its way into the automotive factories of the mid-twentieth century. The UAW stood out like a sore thumb among the country’s many more conservative (and lily-white) unions, with leadership from the likes of former socialist and advocate of industrial democracy Walter Reuther and a strong history of support for the Civil Rights Movement. But to be clear, there was still much work to be done; Black representation in UAW leadership remained scarce despite its membership reaching nearly 30 percent Black in the late 1960s.
The Big Three had hired a wave of Black workers to fill their empty assembly lines during World War II, often subjecting them to the dirtiest and most dangerous tasks available and on-the-job racial discrimination. And then, of course, once white soldiers returned home and a recession set in, those same workers were the first ones sacrificed. Production picked back up in the 1960s, and Black workers were hired in large numbers once again. They grew to become a majority of the workforce in Detroit’s auto plants, but found themselves confronting the same problems as before. In factories where the union and the company had become accustomed to dealing with one another without much fuss, a culture of complacency set in and some workers began to feel that the union was more interested in keeping peace with the bosses than in fighting for its most vulnerable members. Tensions were rising, both in the factories and the world at large. By May 1968, as the struggle for Black liberation consumed the country, the memory of the 1967 Detroit riots remained fresh, and the streets of Paris were paralyzed by general strikes, a cadre of class-conscious Black activists and autoworkers saw an opportunity to press the union into action.
They called themselves DRUM—the Dodge Revolutionary Union Movement. DRUM was founded in the wake of a wildcat strike at Dodge’s Detroit plant, staffed by a handful of Black revolutionaries from the Black-owned, anti-capitalist Inner City Voice alternative newspaper. The ICV sprang up during the 1967 Detroit riots, published with a focus on Marxist thought and the Black liberation struggle. DRUM members boasted experience with other prominent movement groups like the Student Nonviolent Coordinating Committee and the Black Panthers, combining tactical knowledge with a revolutionary zeal attuned to their time and community.
General Gordon Baker, a seasoned activist and assembly worker at Chrysler’s Dodge Main plant, started DRUM with a series of clandestine meetings throughout the first half of 1968. By May 2, the group had grown powerful enough to see four thousand workers walk out of Dodge Main in a wildcat strike to protest the “speed-up” conditions in the plant, which saw workers forced to work at dangerous speeds and put in overtime to meet impossible quotas. Over the course of just one week, the plant had increased its output by 39 percent. Black workers, joined by a group of older Polish women who worked in the plant’s trim shop, shut down the plant for the day, and soon bore the brunt of management’s wrath. Of the seven workers who were fired after the strike, five were Black. Among them was Baker, who sent a searing letter to the company in response to his dismissal. “In this day and age under the brutal repression reaped from the backs of Black workers, the leadership of a wildcat strike is a badge of honor and courage,” he wrote. “You have made the decision to do battle, and that is the only decision you will make. We shall decide the arena and the time.”
DRUM led another thousands-strong wildcat strike on July 8, this time shutting down the plant for two days and drawing in a number of Arab and white workers as well. Prior to the strike, the group had printed leaflets and held rallies that attracted hundreds of workers, students, and community members, a strategy DRUM would go on to use liberally in later campaigns to gin up support and spread its revolutionary message.
Men like Baker, Kenneth Cockrel, and Mike Hamlin were the public face of DRUM, but their work would have been impossible without the work of their female comrades, whose contributions were often overlooked. Hamlin admitted as much in his book-length conversation with longtime political activist and artist Michele Gibbs, A Black Revolutionary’s Life in Labor. “Possibly my deepest regret,” Hamlin writes, “is that we could not curb, much less transform, the doggish behavior and chauvinist attitudes of many of the men.”
Black women in the movement persevered despite this discrimination and disrespect at work, and they also found allies in unexpected places. Grace Lee Boggs, a Chinese American Marxist philosopher and activist with a PhD from Bryn Mawr, met her future husband James Boggs in Detroit after moving there in 1953. She and James, a Black activist, author (1963’s The American Revolution: Pages from a Negro Worker’s Notebook), and Chrysler autoworker, became fixtures in Detroit’s Black radical circles. They naturally fell in with the DRUM cadre, and Grace fit perfectly when Hamlin organized a DRUM-sponsored book club discussion forum in order to draw in progressive white and more moderate Black sympathizers. Interest in the Marxist book club was unexpectedly robust, and it grew to more than eight hundred members in its first year. Grace stepped in to help lead its discussion groups, and allowed young activists to visit her and James at their apartment and talk through thorny philosophical and political questions until the wee hours. She would go on to become one of the nation’s most respected Marxist political intellectuals and a lifelong activist for workers’ rights, feminism, Black liberation, and Asian American issues. As she told an interviewer prior to her death in 2015 at the age of one hundred, “People who recognize that the world is always being created anew, and we’re the ones that have to do it — they make revolutions.”
Further inside the DRUM orbit, Helen Jones, a printer, was the force behind the creation and distribution of their leaflets and publications. Women like Paula Hankins, Rachel Bishop, and Edna Ewell Watson, a nurse and confidant of Marxist scholar and former Black Panther Angela Davis, undertook their own labor organizing projects. In one case, the trio led a union drive among local hospital workers in the DRUM faction, hoping to carve out a place for female leadership within their movement. But ultimately, these expansion plans were dropped due to a lack of full support within DRUM. “Many of the male leaders acted as if women were sexual commodities, mindless, emotionally unstable, or invisible,” Edna Watson later told Dan Georgakas and Marvin Surkin for their Detroit: I Do Mind Dying. She claimed the organization held a traditionalist Black patriarchal view of women, in which they were expected to center and support their male counterparts’ needs at the expense of their own agenda. “There was no lack of roles for women... as long as they accepted subordination and invisibility.”
By 1969, the movement had spread to multiple other plants in the city, birthing groups like ELRUM (Eldon Avenue RUM), JARUM (Jefferson Avenue RUM), and outliers like UPRUM (UPS workers) and HRUM (healthcare workers). The disparate RUM groups then combined forces, forming the League of Revolutionary Black Workers. The new organization was to be led by the principles of Marxism, Leninism, and Maoism, but the league was never an ideological monolith. Its seven-member executive committee could not fully cohere the different political tendencies of its board or its eighty-member deep inner control group. Most urgently, opinions diverged on what shape, if any, further growth should take.
The embattled game company Blizzard announced via Twitter on Thursday that it will host a livestream premiere event for its upcoming Warcraft mobile game on Tuesday, May 3rd at 10 am Pacific on Reveal.Blizzard.com. This isn't the first time that a console franchise has expanded into mobile — Call of Duty and Fortnite have already launched their own iterations for phones and tablets. There are precious few details as to what the game will entail (beyond being set in the Warcraft Universe) or what gameplay mechanics will be used, so be sure to join us next Tuesday for more coverage of WoW's newest foray into the realm of handhelds.
At the start of the year, Google announced the Privacy Sandbox on Android project, a new system designed to eventually replace today's existing third-party cookie schemes and reinvent a more privacy-centered method for serving advertisements. After an initial round of alpha testing and feedback, Google announced on Thursday that the first developer preview of the sandbox is now available as part of Android 13 beta 1.
The Privacy Sandbox is a multi-year development effort that will "limit sharing of user data with third parties and operate without cross-app identifiers, including advertising ID," Google wrote in a February announcement. "We’re also exploring technologies that reduce the potential for covert data collection, including safer ways for apps to integrate with advertising SDKs."
This preview provides developers with early looks at the sandbox's SDK Runtime and Topics API so that they can better understand how they'll fit into their apps and processes once it is officially released. We first saw Topics API back in January. It pulls data from the Chrome browser to identify the user's top five interests for the week, based on their search and browsing history. Those topics are then compared against a database of topics from the Interactive Advertising Bureau and Google's own data. Partner publishers can then ping the Topics API, see what the user is currently into, and then serve the most appropriate ads without having to know every nitty-gritty detail about their potential customer.
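As a rough mental model of that flow (purely illustrative Python; the names and thresholds here are hypothetical and do not mirror the actual Android or Chrome SDK), the selection logic amounts to mapping recent activity onto a fixed taxonomy, keeping a handful of top topics for the week, and exposing only a thin slice of that to any single caller:

```python
# Illustrative-only sketch of the Topics selection flow described above.
# The taxonomy, function names and limits are hypothetical; the real
# implementation lives inside the browser/OS, not in app code like this.
import random
from collections import Counter

# Stand-in for the IAB/Google topic taxonomy: site -> coarse interest topic.
TAXONOMY = {
    "example-sports-news.site": "Sports",
    "example-running-shoes.site": "Athletic Apparel",
    "example-travel-deals.site": "Travel",
}

def weekly_top_topics(visited_sites, k=5):
    """Map the week's browsing onto topics and keep the k most frequent."""
    counts = Counter(TAXONOMY[s] for s in visited_sites if s in TAXONOMY)
    return [topic for topic, _ in counts.most_common(k)]

def topics_for_caller(weekly_topics):
    """Any single ad partner only ever sees a small slice of the weekly set."""
    if not weekly_topics:
        return []
    return [random.choice(weekly_topics)]
```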
Developers will also have access to an early version of the Fledge API, which allows sites to "remarket" to existing users — i.e., serving them ads to remind them that they left items in their shopping cart and should just check out already. The Sandbox comes with everything that developers will need to test it, including the Android SDK and 64-bit Android Emulator. The company intends to further refine the toolset over the coming months and welcomes feedback and questions from the developer community.
They may not be able to shout "Eureka!" like their human colleagues, but AI/ML systems have shown immense potential in the field of compound discovery — whether that's sifting through reams of data to find new therapeutic compounds or imagining new recipes using the ingredients' flavor profiles. Now a team from Meta AI, working with researchers at the University of Illinois Urbana-Champaign, has created an AI that can devise and refine formulas for increasingly high-strength, low-carbon concrete.
Traditional methods for creating concrete, of which we produce billions of tons every year, are far from ecologically friendly. In fact, they generate an estimated 8 percent of the annual global carbon dioxide emission total. Advances have been made in recent years to reduce the concrete industry's carbon footprint (as well as to make the material more rugged, more resilient and even capable of charging EVs), but overall its production remains among the most carbon-intensive in modern construction.
Reducing the carbon emissions associated with concrete could be as simple as changing the ingredients that go into it. The material is made from four basic components: cement, aggregate, water and admixtures (which act as doping agents). Cement is far and away the most carbon-intensive ingredient of the four, so research has focused on reducing the amount of cement needed by supplementing it with lower-carbon materials like fly ash, slag, or ground glass.
Similarly, aggregate materials like gravel, crushed stone and sand might be replaced with recycled concrete. The problem is that there are dozens of potential ingredient materials that could be used, and their relative amounts all interact to influence the structural profile of the resulting concrete. In short, there is a whole slew of possible combinations for researchers to test, select, and refine; and working through those myriad options sequentially, at human speed, would take forever. So the Meta folks trained an AI to do it, much faster.
Working with Prof. Lav Varshney of the electrical and computer engineering department and Prof. Nishant Garg of the civil engineering department, both at the University of Illinois Urbana-Champaign, the team first trained the model using the Concrete Compressive Strength data set. This set includes more than 1,000 concrete formulas as well as their structural attributes, including seven-day and 28-day compressive strength data. The team determined each resulting concrete mixture's carbon footprint using the Cement Sustainability Initiative's Environmental Product Declaration (EPD) tool.
From the generated list of potential formulas, the research team then selected the five most promising options and iteratively refined them until they met or exceeded the seven- and 28-day strength metrics while cutting the mix's carbon footprint by at least 40 percent. The refinement process took mere weeks and ended up producing a concrete formula that exceeded all of those requirements while replacing as much as 50 percent of the required cement with fly ash and slag. Meta then teamed with concrete company Ozinga, the folks who recently built Meta's newest datacenter in Illinois, to further refine the formula and conduct real-world testing.
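For a sense of what that loop can look like, here is a rough sketch of a surrogate-model search over candidate mixes; this is illustrative only, not Meta's pipeline, and the file name, column names, carbon factors and strength threshold are placeholder assumptions layered on top of the public Concrete Compressive Strength data set.

```python
# Rough sketch of a surrogate-model search for low-carbon mixes: illustrative
# only, not Meta's code. Assumes a local CSV copy of the Concrete Compressive
# Strength data with ingredient columns (kg/m^3) plus "age" and a
# "strength_mpa" target column; the carbon factors below are placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

data = pd.read_csv("concrete_compressive_strength.csv")   # hypothetical path
X, y = data.drop(columns=["strength_mpa"]), data["strength_mpa"]
surrogate = GradientBoostingRegressor().fit(X, y)          # strength predictor

# Placeholder embodied-carbon factors (kg CO2 per kg of ingredient).
CARBON = {"cement": 0.90, "fly_ash": 0.01, "slag": 0.05, "aggregate": 0.005}

def evaluate(mix):
    """Predict 28-day strength and tally rough embodied carbon for one mix.
    `mix` is a dict keyed by the same columns the surrogate was trained on."""
    strength = surrogate.predict(pd.DataFrame([mix])[X.columns])[0]
    carbon = sum(CARBON.get(name, 0.0) * amount for name, amount in mix.items())
    return strength, carbon

def shortlist(candidate_mixes, min_strength=35.0, top_n=5):
    """Keep candidates that clear the strength bar, ranked by lowest carbon."""
    scored = [(evaluate(m), m) for m in candidate_mixes]
    viable = [(carbon, m) for (strength, carbon), m in scored if strength >= min_strength]
    return [m for _, m in sorted(viable, key=lambda t: t[0])[:top_n]]
```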
Looking ahead, the Meta team hopes to further improve the formula's 3- and 5-day strength profiles (basically ensuring it dries faster so the rest of the construction can move ahead sooner) and get a better understanding of how it cures under varying weather conditions like wind or high humidity.