
How OpenAI's ChatGPT has changed the world in just a year

In the two months following its November 2022 debut, ChatGPT exploded in popularity, growing from niche online curio to 100 million monthly active users — the fastest user base growth in the history of the internet. In less than a year, it has earned the backing of Silicon Valley’s biggest firms and been shoehorned into myriad applications from academia and the arts to marketing, medicine, gaming and government.

In short, ChatGPT is just about everywhere. Few industries have remained untouched by the viral adoption of generative AI tools. On the first anniversary of its release, let’s take a look back at the year of ChatGPT that brought us here.

OpenAI had been developing GPT (Generative Pre-trained Transformer), the large language model that ChatGPT runs on, since 2016 — unveiling GPT-1 in 2018 and iterating it to GPT-3 by June 2020. With the November 30, 2022 release of GPT-3.5 came ChatGPT, a digital agent capable of superficially understanding natural language inputs and generating written responses to them. Sure, it was rather slow to answer and couldn’t speak to anything that happened after September 2021 — not to mention its tendency to answer queries with misinformation during bouts of “hallucination” — but even that kludgy first iteration demonstrated capabilities far beyond what other state-of-the-art digital assistants like Siri and Alexa could provide.

ChatGPT’s release timing couldn’t have been better. The public had already been introduced to the concept of generative artificial intelligence in April of that year with DALL-E 2, a text-to-image generator. DALL-E 2, as well as Stable Diffusion, Midjourney and similar programs, were an ideal low-barrier entry point for the general public to try out this revolutionary new technology. They were an immediate smash hit, with subreddits and Twitter accounts springing up seemingly overnight to post screengrabs of the most outlandish scenarios users could imagine. And it wasn’t just the terminally online who embraced AI image generation; the technology immediately entered the mainstream discourse as well, extraneous digits and all.

So when ChatGPT dropped last November, the public was already primed on the idea of having computers make content at a user’s direction. The leap to having it make words instead of pictures wasn’t a large one — heck, people had already spent years talking to similar, inferior versions in the form of their phones’ digital assistants.

Q1: [Hype intensifies]

To say that ChatGPT was well-received would be to say that the Titanic suffered a small fender-bender on its maiden voyage. It was a sensation, orders of magnitude bigger than the hype surrounding DALL-E and other image generators. People flat-out lost their minds over the new AI and over OpenAI CEO Sam Altman. Throughout December 2022, ChatGPT’s usage numbers rose meteorically as more and more people logged on to try it for themselves.

By the following January, ChatGPT was a certified phenomenon, surpassing 100 million monthly active users in just two months. That was faster than either TikTok or Instagram, and it remains the fastest user adoption to 100 million in the history of the internet.

We also got our first look at the disruptive potential generative AI offers when ChatGPT managed to pass a series of law school exams (albeit by the skin of its digital teeth). Around the same time, Microsoft extended its existing R&D partnership with OpenAI to the tune of $10 billion. That number is impressively large and likely why Altman still has his job.

As February rolled around, ChatGPT’s user numbers continued to soar, surpassing one billion total users with an average of more than 35 million people using the program each day. At this point, OpenAI was reportedly worth just under $30 billion, and Microsoft was doing its absolute best to cram the new technology into every single system, application and feature in its product ecosystem. ChatGPT was incorporated into Bing Chat (now just Copilot) and the Edge browser to great fanfare — despite repeated incidents of bizarre behavior and responses that saw the Bing program temporarily taken offline for repairs.

Other tech companies began adopting ChatGPT as well: Opera incorporated it into its browser, Snapchat released its GPT-based My AI assistant (which would be unceremoniously abandoned a few problematic months later) and BuzzFeed News’ parent company used it to generate listicles.

March saw more of the same, with OpenAI announcing a new subscription-based service — ChatGPT Plus — which offered users the chance to skip to the head of the queue during peak usage hours, along with features not found in the free version. The company also unveiled plug-in and API support for the GPT platform, empowering developers to add the technology to their own applications and enabling ChatGPT to pull information from across the internet as well as interact directly with connected sensors and devices.

ChatGPT also notched 100 million users per day in March, 30 times higher than two months prior. Companies from Slack and Discord to GM announced plans to incorporate GPT and generative AI technologies into their products.

Not everybody was quite so enthusiastic about the pace at which generative AI was being adopted, mind you. In March, OpenAI co-founder Elon Musk, along with Steve Wozniak and a slew of AI researchers, signed an open letter demanding a six-month moratorium on AI development.

Q2: Electric Boog-AI-loo

Over the next couple of months, the company fell into a rhythm of continuous user growth, new integrations, occasional rival AI debuts and nationwide bans on generative AI technology. In April, for example, ChatGPT’s usage climbed nearly 13 percent month-over-month even as the entire nation of Italy outlawed ChatGPT use by public sector employees, citing GDPR data privacy violations. The Italian ban proved only temporary after the company worked to resolve the flagged issues, but it was an embarrassing rebuke and helped spur further calls for federal regulation.

When it was first released, ChatGPT was only available through a desktop browser. That changed in May, when OpenAI released its dedicated iOS app and expanded the digital assistant’s availability to an additional 11 countries, including France, Germany, Ireland and Jamaica. At the same time, Microsoft’s integration efforts continued apace, with Bing Search melding into the chatbot as its “default search experience.” OpenAI also expanded ChatGPT’s plug-in system to ensure that more third-party developers were able to build ChatGPT into their own products.

ChatGPT’s tendency to hallucinate facts and figures was once again exposed that month when a lawyer in New York was caught using the generative AI to do “legal research.” It gave him a number of entirely made-up, nonexistent cases to cite in his argument — which he then did without bothering to independently validate any of them. The judge was not amused.

By June, a little bit of ChatGPT’s shine had started to wear off. Congress reportedly restricted Capitol Hill staffers’ use of the application over data handling concerns. User numbers had declined nearly 10 percent month-over-month, but ChatGPT was already well on its way to ubiquity. A March update enabling the AI to comprehend and generate Python code in response to natural language queries had only increased its utility.

Q3: [Pushback intensifies]

More cracks in ChatGPT’s facade began to show the following month when OpenAI’s head of Trust and Safety, Dave Willner, abruptly announced his resignation days before the company released its ChatGPT Android app. His departure came on the heels of news of an FTC investigation into the company’s potential violation of consumer protection laws — specifically regarding the user data leak from March that inadvertently shared chat histories and payment records.

It was around this time that OpenAI’s training methods — which involve scraping the public internet for content and feeding it into the massive datasets on which its models are taught — came under fire from copyright holders and marquee authors alike. Much as Getty Images sued Stability AI over Stable Diffusion’s obvious use of copyrighted materials, stand-up comedian and author Sarah Silverman brought suit against OpenAI, alleging that its “Books2” dataset illegally included her copyrighted works. The Authors Guild, which represents Stephen King, John Grisham and 134 others, launched a class-action suit of its own in September. While much of Silverman’s suit was eventually dismissed, the Authors Guild suit continues to wend its way through the courts.

Select news outlets, on the other hand, proved far more amenable. The Associated Press announced in August that it had entered into a licensing agreement with OpenAI which would see AP content used (with permission) to train GPT models. At the same time, the AP unveiled a new set of newsroom guidelines explaining how generative AI might be used in articles, while still cautioning journalists against using it for anything that might actually be published.

ChatGPT itself didn’t seem too inclined to follow the rules. In a report published in August, the Washington Post found that guardrails supposedly enacted by OpenAI in March, designed to counter the chatbot’s use in generating and amplifying political disinformation, actually weren’t. The company told Semafor in April that it was "developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying." Per the Post, those rules simply were not enforced, with the system eagerly returning responses for prompts like “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.”

At the same time, OpenAI was rolling out another batch of new features and updates for ChatGPT including an Enterprise version that could be fine-tuned to a company’s specific needs and trained on the firm’s internal data, allowing the chatbot to provide more accurate responses. Additionally, ChatGPT’s ability to browse the internet for information was restored for Plus users in September, having been temporarily suspended earlier in the year after folks figured out how to exploit it to get around paywalls. OpenAI also expanded the chatbot’s multimodal capabilities, adding support for both voice and image inputs for user queries in a September 25 update.

Q4: Starring Sam Altman as “Lazarus”

The fourth quarter of 2023 has been a hell of a decade for OpenAI. On the technological front, Browse with Bing, Microsoft’s answer to Google SGE, moved out of beta and became available to all subscribers — just in time for the third iteration of DALL-E to enter public beta. Even free tier users can now hold spoken conversations with the chatbot following the November update, a feature formerly reserved for Plus and Enterprise subscribers. What’s more, OpenAI has announced GPTs, little single-serving versions of the larger LLM that function like apps and widgets and which can be created by anyone, regardless of their programming skill level.

The company has also suggested that it might enter the AI chip market at some point in the future, in an effort to shore up the speed and performance of its API services. OpenAI CEO Sam Altman had previously pointed to industry-wide GPU shortages to explain the service’s spotty performance. Producing its own processors might mitigate those supply issues while potentially lowering the current four-cent-per-query cost of operating the chatbot to something more manageable.

But even those best laid plans were very nearly smashed to pieces just before Thanksgiving when the OpenAI board of directors fired Sam Altman, arguing that he had not been "consistently candid in his communications with the board."

That firing didn't take. Instead, it set off 72 hours of chaos within the company and the larger industry, with waves of recriminations and accusations, threats of resignation from the lion’s share of the staff and actual resignations by senior leadership happening by the hour. The company went through three CEOs in as many days, landing back on the one it started with, albeit now free of a board of directors that would even consider acting as a brake on the technology’s further, unfettered commercial development.

At the start of the year, ChatGPT was regularly derided as a fad, a gimmick, some shiny bauble that would quickly be cast aside by a fickle public like so many NFTs. Those predictions could still prove true, but as 2023 has ground on and ChatGPT’s adoption has continued to broaden, the chances of them coming to pass feel increasingly remote.

There is simply too much money wrapped up in ensuring its continued development, from the revenue streams of companies promoting the technology to the investments of firms incorporating it into their products and services. There is also a fear of missing out among companies, S&P Global argues — a worry that they might adopt too late what turns out to be a foundationally transformative technology — and that fear is helping drive ChatGPT’s rapid uptake.

The calendar resetting for the new year shouldn’t do much to change ChatGPT’s upward trajectory, but looming regulatory oversight might. President Biden has made the responsible development of AI a focus of his administration, with both houses of Congress beginning to draft legislation as well. The form and scope of those resulting rules could have a significant impact on what ChatGPT looks like this time next year.


Black hole behavior suggests Doctor Who's 'bigger on the inside' TARDIS trick is theoretically possible

Do black holes, like dying old soldiers, simply fade away? Do they pop like hyperdimensional balloons? Maybe they do, or maybe they pass through a cosmic Rubicon, effectively reversing their natures and becoming inverse anomalies that cannot be entered through their event horizons but which continuously expel energy and matter back into the universe.

In his latest book, White Holes, physicist and philosopher Carlo Rovelli focuses his attention and considerable expertise on the mysterious space phenomena, diving past the event horizon to explore their theoretical inner workings and posit what might be at the bottom of those infinitesimally tiny, infinitely fascinating gravitational points. In this week's Hitting the Books excerpt, Rovelli discusses a scientific schism splitting the astrophysics community as to where all of the information — which, from our current understanding of the rules of our universe, cannot be destroyed — goes once it is trapped within an inescapable black hole.

Riverhead Books

Excerpted from White Holes by Carlo Rovelli. Published by Riverhead Books. Copyright © 2023 by Carlo Rovelli. All rights reserved.


In 1974, Stephen Hawking made an unexpected theoretical discovery: black holes must emit heat. This, too, is a quantum tunnel effect, but a simpler one than the bounce of a Planck star: photons trapped inside the horizon escape thanks to the pass that quantum physics provides to everything. They “tunnel” beneath the horizon. 

So black holes emit heat, like a stove, and Hawking computed their temperature. Radiated heat carries away energy. As it loses energy, the black hole gradually loses mass (mass is energy), becoming ever lighter and smaller. Its horizon shrinks. In the jargon we say that the black hole “evaporates.” 
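Rovelli keeps the equation offstage, but the formula Hawking derived is standard physics and compact enough to quote here: a black hole of mass $M$ radiates at a temperature

$$ T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}}, $$

which is inversely proportional to its mass. A lighter hole is a hotter hole, so the evaporation feeds on itself, accelerating as the horizon shrinks.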

Heat emission is the most characteristic of the irreversible processes: the processes that occur in one time direction and cannot be reversed. A stove emits heat and warms a cold room. Have you ever seen the walls of a cold room emit heat and heat up a warm stove? When heat is produced, the process is irreversible. In fact, whenever the process is irreversible, heat is produced (or something analogous). Heat is the mark of irreversibility. Heat distinguishes past from future. 

There is therefore at least one clearly irreversible aspect to the life of a black hole: the gradual shrinking of its horizon.

But, careful: the shrinking of the horizon does not mean that the interior of the black hole becomes smaller. The interior largely remains what it is, and the interior volume keeps growing. It is only the horizon that shrinks. This is a subtle point that confuses many. Hawking radiation is a phenomenon that regards mainly the horizon, not the deep interior of the hole. Therefore, a very old black hole turns out to have a peculiar geometry: an enormous interior (that continues to grow) and a minuscule (because it has evaporated) horizon that encloses it. An old black hole is like a glass bottle in the hands of a skillful Murano glassblower who succeeds in making the volume of the bottle increase as its neck becomes narrower. 

At the moment of the leap from black to white, a black hole can therefore have an extremely small horizon and a vast interior. A tiny shell containing vast spaces, as in a fable.

In fables, we come across small huts that, when entered, turn out to contain hundreds of vast rooms. This seems impossible, the stuff of fairy tales. But it is not so. A vast space enclosed in a small sphere is concretely possible. 

If this seems bizarre to us, it is only because we became habituated to the idea that the geometry of space is simple: it is the one we studied at school, the geometry of Euclid. But it is not so in the real world. The geometry of space is distorted by gravity. The distortion permits a gigantic volume to be enclosed within a tiny sphere. The gravity of a Planck star generates such a huge distortion. 

An ant that has always lived on a large, flat plaza will be amazed when it discovers that through a small hole it has access to a large underground garage. Same for us with a black hole. What the amazement teaches is that we should not have blind confidence in habitual ideas: the world is stranger and more varied than we imagine. 

The existence of large volumes within small horizons has also generated confusion in the world of science. The scientific community has split and is quarreling about the topic. In the rest of this section, I tell you about this dispute. It is more technical than the rest — skip it if you like — but it is a picture of a lively, ongoing scientific debate. 

The disagreement concerns how much information you can cram into an entity with a large volume but a small surface. One part of the scientific community is convinced that a black hole with a small horizon can contain only a small amount of information. Another disagrees. 

What does it mean to “contain information”? 

More or less this: Are there more things in a box containing five large and heavy balls, or in a box that contains twenty small marbles? The answer depends on what you mean by “more things.” The five balls are bigger and weigh more, so the first box contains more matter, more substance, more energy, more stuff. In this sense there are “more things” in the box of balls. 

But the number of marbles is greater than the number of balls. In this sense, there are “more things,” more details, in the box of marbles. If we wanted to send signals, by giving a single color to each marble or each ball, we could send more signals, more colors, more information, with the marbles, because there are more of them. More precisely: it takes more information to describe the marbles than it does to describe the balls, because there are more of them. In technical terms, the box of balls contains more energy, whereas the box of marbles contains more information.

An old black hole, considerably evaporated, has little energy, because the energy has been carried away via the Hawking radiation. Can it still contain much information, after much of its energy is gone? Here is the brawl.

Some of my colleagues convinced themselves that it is not possible to cram a lot of information beneath a small surface. That is, they became convinced that when most energy has gone and the horizon has become minuscule, only little information can remain inside. 

Another part of the scientific community (to which I belong) is convinced of the contrary. The information in a black hole—even a greatly evaporated one—can still be large. Each side is convinced that the other has gone astray. 

Disagreements of this kind are common in the history of science; one may say that they are the salt of the discipline. They can last long. Scientists split, quarrel, scream, wrangle, scuffle, jump at each other’s throats. Then, gradually, clarity emerges. Some end up being right, others end up being wrong. 

At the end of the nineteenth century, for instance, the world of physics was divided into two fierce factions. One of these followed Mach in thinking that atoms were just convenient mathematical fictions; the other followed Boltzmann in believing that atoms exist for real. The arguments were ferocious. Ernst Mach was a towering figure, but it was Boltzmann who turned out to be right. Today, we even see atoms through a microscope. 

I think that my colleagues who are convinced that a small horizon can contain only a small amount of information have made a serious mistake, even if at first sight their arguments seem convincing. Let’s look at these.

The first argument is that it is possible to compute how many elementary components (how many molecules, for example) form an object, starting from the relation between its energy and its temperature. We know the energy of a black hole (it is its mass) and its temperature (computed by Hawking), so we can do the math. The result indicates that the smaller the horizon, the fewer its elementary components. 
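For readers who want that first argument spelled out (this is textbook thermodynamics, not the book's own presentation): with energy $E = Mc^2$ and Hawking's temperature $T_{\mathrm{H}} \propto 1/M$, integrating $dS = dE/T_{\mathrm{H}}$ yields the Bekenstein-Hawking entropy

$$ S_{\mathrm{BH}} = \frac{k_{\mathrm{B}}\, c^{3} A}{4 G \hbar}, $$

proportional to the area $A$ of the horizon. Since the number of possible configurations grows roughly as $e^{S/k_{\mathrm{B}}}$, a small horizon appears to imply very few elementary components, which is exactly the conclusion this argument reaches.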

The second argument is that there are explicit calculations that allow us to count these elementary components directly, using both of the most studied theories of quantum gravity—string theory and loop theory. The two archrival theories completed this computation within months of each other in 1996. For both, the number of elementary components becomes small when the horizon is small.

These seem like strong arguments. On the basis of these arguments, many physicists have accepted a “dogma” (they call it so themselves): the number of elementary components contained in a small surface is necessarily small. Within a small horizon there can only be little information. If the evidence for this “dogma” is so strong, where does the error lie? 

It lies in the fact that both arguments refer only to the components of the black hole that can be detected from the outside, as long as the black hole remains what it is. And these are only the components residing on the horizon. Both arguments, in other words, ignore that there can be components in the large interior volume. These arguments are formulated from the perspective of someone who remains far from the black hole, does not see the inside, and assumes that the black hole will remain as it is forever. If the black hole stays this way forever—remember—those who are far from it will see only what is outside or what is right on the horizon. It is as if, for them, the interior does not exist.

But the interior does exist! And not only for those (like us) who dare to enter, but also for those who simply have the patience to wait for the black horizon to become white, allowing what was trapped inside to come out. In other words, to imagine that the calculations of the number of components of a black hole given by string theory or loop theory are complete is to have failed to take on board Finkelstein’s 1958 article. The description of a black hole from the outside is incomplete. 

The loop quantum gravity calculation is revealing: the number of components is precisely computed by counting the number of quanta of space on the horizon. But the string theory calculation, on close inspection, does the same: it assumes that the black hole is stationary, and is based on what is seen from afar. It neglects, by hypothesis, what is inside and what will be seen from afar after the hole has finished evaporating — when it is no longer stationary. 

I think that certain of my colleagues err out of impatience (they want everything resolved before the end of the evaporation, where quantum gravity becomes inevitable) and because they forget to take into account what is beyond that which can be immediately seen — two mistakes we all frequently make in life.

Adherents to the dogma find themselves with a problem. They call it “the black hole information paradox.” They are convinced that inside an evaporated black hole there is no longer any information. Now, everything that falls into a black hole carries information. So a large amount of information can enter the hole. Information cannot vanish. Where does it go? 

To solve the paradox, the devotees of the dogma imagine that information escapes the hole in mysterious and baroque ways, perhaps in the folds of the Hawking radiation, like Ulysses and his companions escaping from the cave of the cyclops by hiding beneath sheep. Or they speculate that the interior of a black hole is connected to the outside by hypothetical invisible canals . . . Basically, they are clutching at straws—looking, like all dogmatists in difficulty, for abstruse ways of saving the dogma. 

But the information that enters the horizon does not escape by some arcane, magical means. It simply comes out after the horizon has been transformed from a black horizon into a white horizon.

In his final years, Stephen Hawking used to remark that there is no need to be afraid of the black holes of life: sooner or later, there will be a way out of them. There is — via the child white hole.


What is going on with OpenAI and Sam Altman?

It’s been an eventful weekend at OpenAI’s headquarters in San Francisco. In a surprise move Friday, the company’s board of directors fired co-founder and CEO Sam Altman, which set off an institutional crisis that has seen senior staff resign in protest with nearly 700 rank-and-file employees threatening to do the same. Now the board is facing calls for its own resignation, even after Microsoft had already swooped in to hire Altman’s cohort away for its own AI projects. Here’s everything you need to know about the situation to hold your own at Thanksgiving on Thursday.

How it started

Thursday, November 16

This saga began forever ago by internet standards, or last Thursday in the common parlance. Per a tweet from former OpenAI president Greg Brockman, that was when the company’s head researcher and board member, Ilya Sutskever, contacted Altman to set up a meeting the following day at noon. In that same tweet chain (posted Friday night), Brockman said the company informed the first interim CEO, OpenAI CTO Mira Murati, of the upcoming firings at that time as well:

- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.

- At 12:19PM, Greg got a text from Ilya asking for a quick call. At 12:23PM, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.

- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.

Friday, November 17

Everything kicked off at that Friday noon meeting. Brockman was informed that he would be demoted — removed from the board but remaining president of the company, reporting to Murati once she was installed. Barely ten minutes later, Brockman alleges, Altman was informed of his termination as the public announcement went out. Sutskever subsequently sent a company-wide email stating that “Change can be scary,” per The Information.

Later that afternoon, the OpenAI board along with new CEO Murati addressed a “shocked” workforce in an all-hands meeting. During that meeting, Sutskever reportedly told employees the moves will ultimately “make us feel closer."

At this point, Microsoft, which just dropped a cool $10 billion into OpenAI’s coffers in January as part of a massive, multi-year investment deal with the company, weighed in on the day’s events. CEO Satya Nadella released the following statement:

As you saw at Microsoft Ignite this week, we’re continuing to rapidly innovate for this era of AI, with over 100 announcements across the full tech stack from AI systems, models and tools in Azure, to Copilot. Most importantly, we’re committed to delivering all of this to our customers while building for the future. We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda and an exciting product roadmap; and remain committed to our partnership, and to Mira and the team. Together, we will continue to deliver the meaningful benefits of this technology to the world.

By Friday evening, things really began to spiral. Brockman announced via Twitter that he was quitting in protest. Director of research Jakub Pachocki and head of preparedness Aleksander Madry announced that they, too, were resigning in solidarity.

How it’s going

Saturday/Sunday, November 18/19

On Saturday, November 18, the backtracking began. Altman’s Friday termination notice had stated that, “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

The following morning, OpenAI COO Brad Lightcap wrote in internal communications obtained by Axios that the decision “took [the management team] by surprise” and that management had been in conversation “with the board to try to better understand the reasons and process behind their decision.”

“We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices,” Lightcap wrote. “This was a breakdown in communication between Sam and the board … We still share your concerns about how the process has been handled, are working to resolve the situation, and will provide updates as we’re able.”

A report from The Information midmorning Saturday revealed that OpenAI’s prospective share sale led by Thrive Capital, valued at $86 billion, was in jeopardy following Altman’s firing. Per three unnamed sources within the company, even if the sale does go through, it will likely be at a lower valuation. The price of OpenAI shares has tripled since the start of the year, and quadrupled since 2021, so current and former employees, many of whom were offered stock as hiring incentives, were in line for a big payout. That payout might not be coming anymore.

On Saturday afternoon, Altman announced on Twitter that he would be forming a new AI startup with Brockman’s assistance, potentially doing something with AI chips to counter NVIDIA’s dominance in the sector. At this point OpenAI’s many investors, rightly concerned that their money was about to go up in generative smoke, began pressuring the board of directors to reinstate Altman and Brockman.

We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett…

— Satya Nadella (@satyanadella) November 20, 2023

Microsoft’s Satya Nadella reportedly led that charge. Bloomberg’s sources say Nadella was “furious” over the decision to oust Altman — especially having been given just “a few minutes” of notice before the public announcement was made — and went so far as to recruit Altman and his cohort for Microsoft’s own AI efforts.

Microsoft also has leverage in the form of its investment, much of which is in the form of cloud compute credits (which the GPT platform needs to operate) rather than hard currency. Denying those credits to OpenAI would effectively hobble the startup’s operations.

Interim CEO Mira Murati’s 48-hour tenure at the head of OpenAI came to an end on Sunday when the board named Twitch co-founder Emmett Shear as the new interim CEO. According to Bloomberg reporter Ashlee Vance, Murati had planned to hire Altman and Brockman back in a move designed to force the board of directors into action. Instead, the board “went into total silence” and “found their own CEO Emmett Shear.” Altman spent Sunday at OpenAI HQ, posting an image of himself holding up a green “Guest” badge.

“First and last time i ever wear one of these,” he wrote.

Monday, November 20

On Monday morning, an open letter from more than 500 OpenAI employees circulated online. The group threatened to quit and join the new Microsoft subsidiary unless the board itself resigned and brought back Altman and Brockman (and presumably the other two as well). The number of signatories has since grown to nearly 700.

Breaking: 505 of 700 employees @OpenAI tell the board to resign. pic.twitter.com/M4D0RX3Q7a

— Kara Swisher (@karaswisher) November 20, 2023

Doesn’t look like that will be happening, however — despite Sutskever’s early morning mea culpa. The board has already missed its deadline to respond to the open letter, Microsoft has already hired away both Altman and Brockman and Shear has already been named interim-CEO.

Shear stepped down as CEO of Twitch in March, having led the company for more than 16 years, and has spent the past seven months working as a partner at Y Combinator. Amazon acquired the live video streaming app in 2014 for just under $1 billion.

“I took this job because I believe that OpenAI is one of the most important companies currently in existence. When the board shared the situation and asked me to take the role, I did not make the decision lightly,” Shear told OpenAI employees Monday.

“Ultimately I felt that I had a duty to help if I could,” he added.

Shear was quick to point out that Altman’s termination was “handled very badly, which has seriously damaged our trust.” As such he announced the company will hire an independent investigator to report on the run-up to Friday’s SNAFU.

“The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that,” Shear continued. “I’m not crazy enough to take this job without board support for commercializing our awesome models.”

Following his departure to Microsoft on Monday, Altman posted, “the OpenAI leadership team, particularly mira brad and jason but really all of them, have been doing an incredible job through this that will be in the history books.”

“Incredibly proud of them,” he wrote.

This is a developing story. Please check back for updates.


Stadium card stunts and the art of programming a crowd

With college bowl season just around the corner, football fans across the nation will be dazzled not just by the on-field action, but also by the intricate "card stunts" performed by members of the stadium's audience. The highly coordinated crowd work is capable of producing detailed images that resemble the pixelated graphics on computer screens — and which are coded in much the same manner.

Michael Littman's new book, Code to Joy: Why Everyone Should Learn a Little Programming, is filled with similar examples of how the machines around us operate and how we need not distrust an automaton-filled future so long as we learn to speak their language (at least until they finish learning ours). From sequencing commands to storing variables, Code to Joy provides an accessible and entertaining guide to the very basics of programming for fledgling coders of all ages.  

MIT Press

Excerpted from Code to Joy: Why Everyone Should Learn a Little Programming by Michael L. Littman. Published by MIT Press. Copyright © 2023 by Michael L. Littman. All rights reserved.


“GIMME A BLUE!”

Card stunts, in which a stadium audience holds up colored signs to make a giant, temporary billboard, are like flash mobs where the participants don’t need any special skills and don’t even have to practice ahead of time. All they have to do is show up and follow instructions in the form of a short command sequence. The instructions guide a stadium audience to hold aloft the right poster-sized colored cards at the right time as announced by a stunt leader. A typical set of card-stunt instructions begins with instructions for following the instructions: 

  • listen to instructions carefully 

  • hold top of card at eye level (not over your head) 

  • hold indicated color toward field (not facing you) 

  • pass cards to aisle on completion of stunts (do not rip up the cards)

These instructions may sound obvious, but not stating them surely leads to disaster. Even so, you know there’s gotta be a smart alec who asks afterward, “Sorry, what was that first one again?” It’s definitely what I’d do. 

Then comes the main event, which, for one specific person in the crowd, could be the command sequence: 

  1. Blue 

  2. Blue 

  3. Blue 

Breathtaking, no? Well, maybe you have to see the bigger picture. The whole idea of card stunts leverages the fact that the members of a stadium crowd sit in seats arranged in a grid. By holding up colored rectangular sign boards, they transform themselves into something like a big computer display screen. Each participant acts as a single picture element — person pixels! Shifts in which cards are being held up change the image or maybe even cause it to morph like a larger-than-life animated gif.

Card stunts began as a crowd-participation activity at college sports in the 1920s. They became much less popular in the 1970s when it was generally agreed that everyone should do their own thing, man. In the 1950s, though, there was a real hunger to create ever more elaborate displays. Cheer squads would design the stunts by hand, then prepare individual instructions for each of a thousand seats. You’ve got to really love your team to dedicate that kind of energy. A few schools in the 1960s thought that those newfangled computer things might be helpful for taking some of the drudgery out of instruction preparation and they designed programs to turn sequences of hand-drawn images into individualized instructions for each of the participants. With the help of computers, people could produce much richer individualized sequences for each person pixel that said when to lift a card, what color to lift, and when to put it down or change to another card. So, whereas the questionnaire example from the previous section was about people making command sequences for the computer to follow, this example is about the computer making command sequences for people to follow. And computer support for automating the process of creating command sequences makes it possible to create more elaborate stunts. That resulted in a participant’s sequence of commands looking like:

  • up on 001 white 

  • 003 blue 

  • 005 white 

  • 006 red 

  • 008 white 

  • 013 blue 

  • 015 white 

  • 021 down 

  • up on 022 white 

  • 035 down 

  • up on 036 white 

  • 043 blue 

  • 044 down 

  • up on 045 white 

  • 057 metallic red 

  • 070 down

Okay, it’s still not as fun to read the instructions as to see the final product—in this actual example, it’s part of an animated Stanford “S.” To execute these commands in synchronized fashion, an announcer in the stadium calls out the step number (“Forty-one!”) and each participant can tell from his or her instructions what to do (“I’m still holding up the white card I lifted on 36, but I’m getting ready to swap it for a blue card when the count hits 43”). 

As I said, it’s not that complicated for people to be part of a card stunt, but it’s a pretty cool example of creating and following command sequences where the computer tells us what to do instead of the other way around. And, as easy as it might be, sometimes things still go wrong. At the 2016 Democratic National Convention, Hillary Clinton’s supporters planned an arena-wide card stunt. Although it was intended to be a patriotic display of unity, some attendees didn’t want to participate. The result was an unreadable mess that, depressingly, was supposed to spell out “Stronger Together.” 

These days, computers make it a simple matter to turn a photograph into instructions about which colors to hold up where. Essentially, any digitized image is already a set of instructions for what mixture of red, blue, and green to display at each picture position. One interesting challenge in translating an image into card-stunt instructions is that typical images consist of millions of colored dots (megapixels), whereas a card stunt section of a stadium has maybe a thousand seats. Instead of asking each person to hold up a thousand tiny cards, it makes more sense to compute an average of the colors in that part of the image. Then, from the collection of available colors (say, the classic sixty-four Crayola options), the computer just picks the closest one to the average. 

If you think about it, it’s not obvious how a computer can average colors. You could mix green and yellow and decide that the result looks like the spring green crayon, but how do you teach a machine to do that? Let’s look at this question a little more deeply. It’ll help you get a sense of how computers can help us instruct them better. Plus, it will be our entry into the exciting world of machine learning. 

There are actually many, many ways to average colors. A simple one is to take advantage of the fact that each dot of color in an image file is stored as the amount of red, green, and blue color in it. Each component color is represented as a whole number between 0 and 255, where 255 was chosen because it’s the largest value you can make with eight binary digits, or bits. Using quantities of red-blue-green works well because the color receptors in the human eye translate real-world colors into this same representation. That is, even though purple corresponds to a specific wavelength of light, our eyes see it as a particular blend of green, blue, and red. Show someone that same blend, and they’ll see purple. So, to summarize a big group of pixels, just average the amount of blue in those pixels, the amount of red in those pixels, and the amount of green in those pixels. That basically works. Now, it turns out, for a combination of physical, perceptual, and engineering reasons, you get better results by squaring the values before averaging, and square rooting the values after averaging. But that’s not important right now. The important thing is that there is a mechanical way to average a bunch of colored dots to get a single dot whose color summarizes the group. 
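For the curious, that whole recipe fits in a few lines of Python. This is just a sketch (the function name and the NumPy dependency are illustrative choices, not anything from the book):

```python
import numpy as np

def average_color(pixels):
    """Summarize a block of RGB pixels (0-255 per channel) as one color:
    square each channel, average the squares, then take the square root."""
    rgb = np.asarray(pixels, dtype=float).reshape(-1, 3)
    return np.sqrt((rgb ** 2).mean(axis=0))

# Averaging a pure red pixel with a pure blue one:
# average_color([(255, 0, 0), (0, 0, 255)]) -> approximately [180.3, 0, 180.3]
```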

Once that average color is produced, the computer needs a way of finding the closest color to the cards we have available. Is that more of a burnt sienna or a red-orange? A typical (if imperfect) way to approximate how similar two colors are using their red-blue-green values is what’s known as the Euclidean distance formula. Here’s what that looks like as a command sequence:

  • take the difference between the amount of red in the two colors

  • square it

  • take the difference between the amount of blue in the two colors

  • square it

  • take the difference between the amount of green in the two colors

  • square it

  • add the three squares together

  • take the square root

So to figure out what card should be held up to best capture the average of the colors in the corresponding part of the image, just figure out which of the available colors (blue, yellow green, apricot, timberwolf, mahogany, periwinkle, etc.) has the smallest distance to that average color at that location. That’s the color of the card that should be given to the pixel person sitting in that spot in the grid. 
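Continuing the sketch, here is how that closest-card step might look, again in illustrative Python with made-up palette values standing in for the real cards:

```python
import numpy as np

# Approximate RGB values for a handful of card colors (illustrative only).
PALETTE = {
    "blue":         (31, 117, 254),
    "yellow green": (197, 227, 132),
    "apricot":      (253, 217, 181),
    "timberwolf":   (219, 215, 210),
    "mahogany":     (205, 74, 74),
    "periwinkle":   (197, 208, 230),
}

def closest_card(avg_rgb):
    """Return the name of the palette color with the smallest Euclidean
    distance to the averaged color for this seat's patch of the image."""
    def dist(card_rgb):
        return np.sqrt(sum((a - c) ** 2 for a, c in zip(avg_rgb, card_rgb)))
    return min(PALETTE, key=lambda name: dist(PALETTE[name]))
```

Run once per seat, the two functions together produce exactly the seat-by-seat card map described above.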

The similarity between this distance calculation and the color averaging operation is, I’m pretty sure, just a coincidence. Sometimes a square root is just a square root. 

Stepping back, we can use these operations — color averaging and finding the closest color to the average — to get a computer to help us construct the command sequence for a card stunt. The computer takes as input a target image, a seating chart, and a set of available color cards, and then creates a map of which card should be held up in each seat to best reproduce the image. In this example, the computer mostly handles bookkeeping and doesn’t have much to do in terms of decision-making beyond the selection of the closest color. But the upshot here is that the computer is taking over some of the effort of writing command sequences. We’ve gone from having to select every command for every person pixel at every moment in the card stunt to selecting images and having the computer generate the necessary commands. 

This shift in perspective opens up the possibility of turning over more control of the command-sequence generation process to the machine. In terms of our 2 × 2 grid from chapter 1, we can move from telling (providing explicit instructions) to explaining (providing explicit incentives). For example, there is a variation of this color selection problem that is a lot harder and gives the computer more interesting work to do. Imagine that we could print up cards of any color we needed but our print shop insists that we order the cards in bulk. They can only provide us with eight different card colors, but we can choose any colors we want to make up that eight. (Eight is the number of different values we can make with 3 bits — bits come up a lot in computing.) So we could choose blue, green, blue-green, blue-violet, cerulean, indigo, cadet blue, and sky blue, and render a beautiful ocean wave in eight shades of blue. Great! 

But then there would be no red or yellow to make other pictures. Limiting the color palette to eight may sound like a bizarre constraint, but it turns out that early computer monitors worked exactly like that. They could display any of millions of colors, but only eight distinct ones on the screen at any one time. 

With this constraint in mind, rendering an image in colored cards becomes a lot trickier. Not only do you have to decide which color from our set of color options to make each card, just as before, but you have to pick which eight colors will constitute that set of color options. If we’re making a face, a variety of skin tones will be much more useful than distinctions among shades of green or blue. How do we go from a list of the colors we wish we could use because they are in the target image to the much shorter list of colors that will make up our set of color options? 

Machine learning, and specifically an approach known as clustering or unsupervised learning, can solve this color-choice problem for us. I will tell you how. But first let’s delve into a related problem that comes from turning a face into a jigsaw puzzle. As in the card-stunt example, we’re going to have the computer design a sequence of commands for rendering a picture. But there’s a twist—the puzzle pieces available for constructing the picture are fixed in advance. Similar to the dance-step example, it will use the same set of commands and consider which sequence produces the desired image.


OpenAI potentially considering reinstating its freshly-ousted CEO Sam Altman

Following his surprise firing on Friday, currently-former OpenAI CEO Sam Altman might not be as out of a job as we initially thought, according to new word from The Verge on Saturday. Sources close to Altman reportedly say that the board itself, in a stunning reversal, has "agreed in principle" to resign while reinstating him to his former position. However, the board has since reportedly missed a 5pm PT deadline regarding the decision.

Shortly after Altman's firing on Friday afternoon, several senior staffers — including former Chairman and President Greg Brockman, Director of Research Jakub Pachocki, Head of Preparedness Aleksander Madry and Senior Researcher Szymon Sidor — tendered their resignations in protest, and more departures were reportedly incoming as of this story's publication. Numerous additional OpenAI staffers are reportedly willing to follow Altman, a la Jerry Maguire, to a new AI startup venture should he decide to launch one.

An internal memo circulated after Altman's dismissal argued that his termination was not related to "malfeasance or anything related to our financial, business, safety, or security/privacy practices,” per Axios' reporting.

Sam and I are shocked and saddened by what the board did today.

Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.

We too are still trying to figure out exactly…

— Greg Brockman (@gdb) November 18, 2023

Microsoft is a major investor in the OpenAI venture — having injected some $13 billion into the project's coffers this past January as part of a long term partnership between the two. It maintains the "utmost confidence" in OpenAI interim-CEO Mira Murati, and "remains confident" in the partnership overall. 

Despite those assurances, rank-and-file employees were given little notice prior to the official announcement going out (Altman himself receiving even less) of the change in leadership. Altman had, in the days leading up to his termination, remained an active supporter and recruiter for the firm, appearing at the Asia-Pacific Economic Cooperation forum less than a day prior to his firing. 

According to the New York Times, neither Altman nor Brockman are guaranteed a return to power, largely on account of the company's non-profit origins, which preclude investors from directing company-wide decisions. They instead leave those choices to members of the board itself. Altman and Brockman were both members of the OpenAI board. However, with their departures, only lead researcher, Ilya Sutskever; Quora CEO Adam D’Angelo; director of strategy at Georgetown’s Center for Security and Emerging Technology Helen Toner; and computer scientist Tasha McCauley remain members — at least, through the weekend.


OpenAI CEO Sam Altman ousted as 'board no longer has confidence' in his leadership

In a surprise shakeup of its c-suite Friday, OpenAI's board of directors announced that CEO Sam Altman is leaving both the company and the board, effective immediately. Chief Technology Officer Mira Murati has been named interim CEO.

Altman's ousting reportedly follows an internal "deliberative review process" which found he had not been "consistently candid in his communications with the board, hindering its ability to exercise its responsibilities," the company announced. As such, "the board no longer has confidence in his ability to continue leading OpenAI."

The board of directors thanked Altman for his "many contributions to the founding and growth of OpenAI," but said that "as the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”

This is a developing story. Please check back for updates.


What happened to Washington's wildlife after the largest dam removal in US history

The man-made flood that miraculously saved our heroes at the end of O Brother, Where Art Thou? was an actual occurrence in the 19th and 20th centuries — and a fairly common one at that — as river valleys across the American West were dammed up and drowned at the altar of economic progress and electrification. Such was the case with Washington State's Elwha River in the 1910s. Its dam provided the economic impetus to develop the Olympic Peninsula but also blocked off nearly 40 miles of river from the open ocean, preventing native salmon species from making their annual spawning trek. However, after decades of legal wrangling by the Lower Elwha Klallam Tribe, the biggest dams on the river today are the kind made by beavers.

In this week's Hitting the Books selection, Eat, Poop, Die: How Animals Make Our World, University of Vermont conservation biologist Joe Roman recounts how quickly nature can recover when a 108-foot tall migration barrier is removed from the local ecosystem. This excerpt discusses the naturalists and biologists who strive to understand how nutrients flow through the Pacific Northwest's food web, and the myriad ways it's impacted by migratory salmon. The book as a whole takes a fascinating look at how the most basic of biological functions (yup, poopin!) of even just a few species can potentially impact life in every corner of the planet.   

Hachette Books

Excerpted from Eat, Poop, Die: How Animals Make Our World by Joe Roman. Published by Hachette Book Group. Copyright © 2023 by Joe Roman. All rights reserved.


When construction began in 1910, the Elwha Dam was designed to attract economic development to the Olympic Peninsula in Washington, supplying the growing community of Port Angeles with electric power. It was one of the first high-head dams in the region, with water moving more than a hundred yards from the reservoir to the river below. Before the dam was built, the river hosted ten anadromous fish runs. All five species of Pacific salmon — pink, chum, sockeye, Chinook, and coho — were found in the river, along with bull trout and steelhead. In a good year, hundreds of thousands of salmon ascended the Elwha to spawn. But the contractors never finished the promised fish ladders. As a result, the Elwha cut off most of the watershed from the ocean and 90 percent of migratory salmon habitat.

Thousands of dams block the rivers of the world, decimating fish populations and clogging nutrient arteries from sea to mountain spring. Some have fish ladders. Others ship fish across concrete walls. Many act as permanent barriers to migration for thousands of species.

By the 1980s, there was growing concern about the effect of the Elwha on native salmon. Populations had declined by 95 percent, devastating local wildlife and Indigenous communities. River salmon are essential to the culture and economy of the Lower Elwha Klallam Tribe. In 1986, the tribe filed a motion through the Federal Energy Regulatory Commission to stop the relicensing of the Elwha Dam and the Glines Canyon Dam, an upstream impoundment that was even taller than the Elwha. By blocking salmon migration, the dams violated the 1855 Treaty of Point No Point, in which the Klallam ceded a vast amount of the Olympic Peninsula on the stipulation that they and all their descendants would have “the right of taking fish at usual and accustomed grounds.” The tribe partnered with environmental groups, including the Sierra Club and the Seattle Audubon Society, to pressure local and federal officials to remove the dams. In 1992, Congress passed the Elwha River Ecosystem and Fisheries Restoration Act, which authorized the dismantling of the Elwha and Glines Canyon Dams.

The demolition of the Elwha Dam was the largest dam-removal project in history; it cost $350 million and took about three years. Beginning in September 2011, coffer dams shunted water to one side as the Elwha Dam was decommissioned and destroyed. The Glines Canyon was more challenging. According to Pess, a “glorified jackhammer on a floating barge” was required to dismantle the two-hundred-foot impoundment. The barge didn’t work when the water got low, so new equipment was helicoptered in. By 2014, most of the dam had come down, but rockfall still blocked fish passage. It took another year of moving rocks and concrete before the fish had full access to the river.

The response of the fish was quick, satisfying, and sometimes surprising. Elwha River bull trout, landlocked for more than a century, started swimming back to the ocean. The Chinook salmon in the watershed increased from an average of about two thousand to four thousand. Many of the Chinook were descendants of hatchery fish, Pess told me over dinner at Nerka. “If ninety percent of your population prior to dam removal is from a hatchery, you can’t just assume that a totally natural population will show up right away.” Steelhead trout, which had been down to a few hundred, now numbered more than two thousand.

Within a few years, a larger mix of wild and local hatchery fish had moved back to the Elwha watershed. And the surrounding wildlife responded too. The American dipper, a river bird, fed on salmon eggs and insects infused with the new marine-derived nutrients. Their survival rates went up, and the females who had access to fish became healthier than those without. They started having multiple broods and didn’t have to travel so far for their food, a return, perhaps, to how life was before the dam. A study in nearby British Columbia showed that songbird abundance and diversity increased with the number of salmon. They weren’t eating the fish — in fact, they weren’t even present during salmon migration. But they were benefiting from the increase in insects and other invertebrates.

Just as exciting, the removal of the dams rekindled migratory patterns that had gone dormant. Pacific lamprey started traveling up the river to breed. Bull trout that had spent generations in the reservoir above the dam began migrating out to sea. Rainbow trout swam up and down the river for the first time in decades. Over the years, the river started to look almost natural as the sediments that had built up behind the dams washed downstream.

The success on the Elwha could be the start of something big, encouraging the removal of other aging dams. There are plans to remove the Enloe Dam, a fifty-four-foot concrete wall in northern Washington, which would open up two hundred miles of river habitat for steelhead and Chinook salmon. Critically endangered killer whales, downstream off the coast of the Pacific Northwest, would benefit from this boost in salmon, and as there are only seventy individuals remaining, they need every fish they can get.

The spring Chinook salmon run on the Klamath River in Northern California is down 98 percent since eight dams were constructed in the twentieth century. Coho salmon have also been in steep decline. In the next few years, four dams are scheduled to come down with the goal of restoring salmon migration. Farther north, the Snake River dams could be breached to save the endangered salmon of Washington State. If that happens, historic numbers of salmon could come back — along with the many species that depended on the energy and nutrients they carry upstream.

Other dams are going up in the West — dams of sticks and stones and mud. Beaver dams help salmon by creating new slow-water habitats, critical for juvenile salmon. In Washington, beaver ponds cool the streams, making them more productive for salmon. In Alaska, the ponds are warmer, and the salmon use them to help metabolize what they eat. Unlike the enormous concrete impoundments, designed for stability, beaver dams are dynamic, heterogeneous landscapes that salmon can easily travel through. Beavers eat, they build dams, they poop, they move on. We humans might want things to be stable, but Earth and its creatures are dynamic.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-eat-poop-die-joe-roman-hatchette-books-153032502.html?src=rss

Humane's Ai Pin costs $699 and ships in early 2024, which is about all we know for certain

Wearable startup Humane has been dripping out details about its upcoming device, the AI Pin, for months now. We first saw it at a TED Talk in May and, more recently, got a glimpse of its promised capabilities at Paris Fashion Week, ahead of Thursday's official unveiling. However, many questions remain about how the wearable AI will actually do everything the company says it will.

Here's what we do know: The Humane AI Pin is a clothing-worn wearable AI assistant that can reportedly perform many of the tasks that modern cellphones and digital assistants do, but in a radically different form factor. It has no screen, instead reportedly operating primarily through voice commands and occasionally through a virtual screen projected onto the user's hand. It costs $699, plus another $24 a month because Humane insisted on launching its own MVNO (mobile virtual network operator) on top of T-Mobile's network. That $24/month "Humane Subscription" includes a dedicated cell phone number for the Pin with unlimited talk, text and data, rather than allowing the device to tether to your existing phone.

Humane AI

The device itself will be available in three colors — Eclipse, Equinox, and Lunar — when orders begin shipping in early 2024. The magnetic clip that affixes the device to your clothing doubles as battery storage and includes a pair of backup batteries for users to keep with them. The AI Pin also sports an ultra-wide RGB camera as well as depth and motion sensors, all of which allow "the device to see the world as you see it," per the company's release.

The AI Pin will reportedly run on a Snapdragon processor with a dedicated Qualcomm AI Engine supporting its custom Cosmos OS. Its "entirely new AI software framework, the Ai Bus," reportedly removes the need to download content to the device itself. Instead, it "quickly understands what you need, connecting you to the right AI experience or service instantly." Collaborations with both Microsoft and OpenAI will reportedly give the AI Pin "access to some of the world’s most powerful AI models and platforms."

There is still much we don't know about the AI Pin, however, like how long each battery module lasts and how sensitive the anti-tamper system that locks down a "compromised" device will be. Live demonstrations of the technology have been rare to date and hands-on opportunities nearly nonexistent. Humane is hosting a debut event Thursday afternoon where, presumably, functional iterations of the AI Pin will be on display.

This article originally appeared on Engadget at https://www.engadget.com/humanes-ai-pin-costs-699-and-ships-in-early-2024-which-is-about-all-we-know-for-certain-181048809.html?src=rss

Google's AI-empowered search feature goes global with expansion to 120 countries

Google's Search Generative Experience (SGE), which currently provides generative AI summaries at the top of the search results page for select users, is about to become much more widely available. Just six months after its debut at I/O 2023, the company announced Wednesday that SGE is expanding to Search Labs users in 120 countries and territories, gaining support for four additional languages and receiving a handful of helpful new features.

Unlike its frenetic rollout of the Bard chatbot in March, Google has taken a slightly more measured approach in distributing its AI search assistant. The company began with English-language searches in the US in May, expanded to English-language users in India and Japan in August and on to teen users in September. As of Wednesday, users from Brazil to Bhutan can give the feature a try. SGE now supports Spanish, Portuguese, Korean and Indonesian, in addition to the existing English, Hindi and Japanese, so you'll be able to search and converse with the assistant in natural language, whichever form it might take. These features arrive on Chrome desktop Wednesday, with the Search Labs for Android app versions slowly rolling out over the coming week.

Among SGE's new features is an improved follow-up function whereby users can ask additional questions of the assistant directly on the search results page. Like a mini-Bard window tucked into the generated summary, the new feature enables users to drill down on a subject without leaving the results page or even needing to type their queries out. Google will reportedly restrict ads to specific, clearly denoted areas of the page to avoid confusion between them and the generated content. Users can expect follow-ups to start showing up in the coming weeks. They're available only to English-language users in the US to start, but will likely expand as Google continues to iterate on the technology.

SGE will also start helping to clarify ambiguous translation terms. For example, if you're trying to translate "Is there a tie?" into Spanish, the output changes depending on whether you mean a draw between two competitors ("un empate") or the tie you wear around your neck ("una corbata"), since the two senses take different words and grammatical genders. The new feature will automatically recognize such words and highlight them; clicking one pops up a window asking you to pick between the versions. This should prove especially helpful in languages with grammatical gender, where, say, cars are masculine but bicycles are feminine and you need to specify which sense you intend. Spanish is one of those languages, and the capability is coming first to US users for English-to-Spanish translations.
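To make that flow concrete, here's a minimal, purely illustrative Python sketch of the pattern described above: detect sense-dependent words in a query, then surface the candidate translations for the user to choose between. The AMBIGUOUS_TERMS table and flag_ambiguities function are invented stand-ins for this example; Google hasn't published how SGE actually implements the feature.

```python
# Illustrative sketch only; not Google's implementation.
# Maps an ambiguous English word to its possible senses and
# the Spanish translation each sense would produce.
AMBIGUOUS_TERMS = {
    "tie": {
        "a draw between competitors": "un empate",
        "neckwear": "una corbata",
    },
}

def flag_ambiguities(sentence: str) -> dict[str, dict[str, str]]:
    """Return each word in the sentence that needs sense disambiguation."""
    words = {w.strip("?.,!").lower() for w in sentence.split()}
    return {w: AMBIGUOUS_TERMS[w] for w in words if w in AMBIGUOUS_TERMS}

if __name__ == "__main__":
    for word, senses in flag_ambiguities("Is there a tie?").items():
        print(f'"{word}" is ambiguous; pick a sense:')
        for sense, translation in senses.items():
            print(f"  {sense} -> {translation}")
```

Running it on "Is there a tie?" flags "tie" and lists both candidate translations, roughly the choice SGE reportedly presents in its pop-up.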

Finally, Google plans to expand the interactive definitions normally found in its generated summaries for educational topics like science, history or economics to coding- and health-related searches as well. This update should arrive within the next month, again first for English-language users in the US before spreading to more territories in the coming months.

This article originally appeared on Engadget at https://www.engadget.com/googles-ai-empowered-search-feature-goes-global-with-expansion-to-120-countries-180028084.html?src=rss
