
Offworld 'company towns' are the wrong way to settle the solar system

Company towns — wherein a single firm provides a community with most or all necessary services, from housing and employment to commerce and amenities — have dotted America since before the Civil War. As we near the end of the first quarter of the 21st century, they're making a comeback, with a new generation of ultra-wealthy elites gobbling up land and looking to build towns in their own image.

And why should only terrestrial workers be exploited? Elon Musk has long talked of his plans to colonize Mars through his company SpaceX, and those plans don't happen without a sizeable — and in this case, notably captive — workforce on hand. The same Elon Musk who spent $44 billion to run a ubiquitous social media site into the ground, whose brain-computer interface company can't stop killing monkeys and whose automotive company can't stop killing pedestrians, wants to construct entire settlements wholly reliant on his company's largesse and logistics train. Are we really going to trust the mercurial CEO with people's literal air supplies?

In this week's Hitting the Books, Rice University biologist and podcaster Kelly Weinersmith and her husband Zach (of Saturday Morning Breakfast Cereal fame) examine what it will actually take to put people on the red planet and what unforeseen costs we might have to pay to accomplish such a goal in their new book A City on Mars: Can we settle space, should we settle space, and have we really thought this through?

Penguin Random House

Excerpted from A City on Mars: Can we settle space, should we settle space, and have we really thought this through? by Kelly and Zach Weinersmith. Published by Penguin. Copyright © 2023 by Kelly and Zach Weinersmith. All rights reserved.


On the Care and Feeding of Space Employees

One of the first things to know about company towns is that companies don’t appear to want to be in charge of housing. In our experience, people often think housing was an actively pursued control tactic, but if you look at the available data and the oral histories, companies often seem downright reluctant to supply housing at all. In Dr. Price Fishback’s economic analysis of coal towns in early-twentieth-century Appalachia, Soft Coal, Hard Choices, he found that companies able to have a third party supply housing typically did. This is hard to square with the idea that housing was built specifically with sinister intentions.

There are also good theoretical reasons to explain why companies build housing and rent it out to workers. Suppose Elon Musk is building the space city Muskow. Having wisely consulted the nearest available Weinersmith, he decides he shouldn’t own employee housing due to something or other about the risks of power imbalance. He looks to hire builders, but immediately runs into a problem: very few companies are available for construction on Mars. Let’s consider the simple case where only one company is willing to do it.

Well, guess what. That company now has monopoly power. They can raise home prices or lower home quality, making Muskow less attractive to potential workers. Musk can now only improve the situation by paying workers more, costing him money while lining the pockets of the housing provider.

If he wants to avoid this, Musk’s ideal option is to attract more building companies, so they can compete with each other. If that’s not possible, as was often the case in remote company towns, then the only alternative is to build the housing himself. This works, but the tradeoff is that he’s now managing housing in addition to focusing on his core business. He’s also acquired a lot of control over his employees. None of this setup requires Musk to be a power-hungry bastard — all it requires is that he needs to attract workers to a place where there’s zero competition for housing construction.

Historically, where things get more worrisome is in rental agreements, which often tied housing to employment. Even these can partially be explained as rational choices a non-evil bastard might non-evilly make. Workers in mines were often temporary. Mines were temporary, too, existing only until the resources were no longer profitable. This made homeownership a less compelling prospect for a worker. Why? Two reasons. First, if a town may suddenly fold in fifteen years because a copper mine stops being profitable, buying a house is a bad investment. Second, if you own a home, it’s hard for you to leave. This is a problem because threatening to leave is a classic way to enhance your bargaining position as a worker.

Once you have people whose housing is tied to their job, the potential for abuse is enormous — especially during strikes. Rental agreements were often tied to employment, and so striking or even having an injury could mean the loss of your home. When your boss is also your landlord, their ability to threaten you and your family is tremendous, and indeed narrative accounts refer to eviction of families with children by force. If employees either owned their homes or had more secure rental agreements, power would have run the other way. They could have struck for better wages or conditions and occupied those homes to make it harder for their employer to bring in replacements.

It may be tempting to see this as a purely capitalist problem, but very similar results occurred in Soviet monotown housing. Employees tended to get reasonably nice company-town housing; if they lost their jobs, they had to go to the local Soviet, which provided far worse accommodations. As one author put it, “Thus, housing became the method of controlling workers par excellence.” This suggests that there’s a deep structural dynamic here — when your employer owns your housing, they’re apt to use it against you at some point.

In space, you can’t kick people out of their houses unless you’re prepared to kill them or pay for a pricey trip home. On Mars, orbital mechanics may preclude the trip even if you’re able to afford it. In arguing with space-settlement geeks, housing concerns are often set up as binaries — “Look, they’re not going to kill the employees, so they’ll have to treat them well.” In fact, there’s a spectrum of bastardry available. A company-town boss on Mars could provide lower-quality food, reduce floor space, restrict the flow of beet wine, deny you access to the pregnodrome. They could also tune your atmosphere. We found one account by a British submariner, in which he claimed to adjust the balance of oxygen to carbon dioxide depending on whether he wanted people more lethargic or more active. Whether it’ll be worth the risk of pissing off employees who cost, at least, millions to deliver to the settlement is harder to say.

This overall logic — companies must supply amenities, therefore companies acquire power — repeats across contexts in company towns. To attract skilled employees who may have families, the company must supply housing, yes, but they also must supply other regular town stuff — shopping, entertainment, festivals, sanitation, roads, bridges, municipal planning, schools, temples, churches. When one company controls shopping, they set the prices and they know what you buy. When they control entertainment and worship, they have power over employee speech and behavior. When they control schools, they have power over what is taught. When they control the hospitals, they control who gets health care, and how much.

Even if the company does a decent job on all these fronts, there may still be resistance, basically because people don’t love having so much of their lives controlled by one entity. Fishback argued that company towns, for all their issues, were not as bad as their reputation. In theorizing why, he suggested one problem you might call the omni-antagonist effect. Think about what groups you’re most likely to be angry at during any given moment of adult life. Landlord? Home-repair company? Local stores? Utility companies? Your homeowners association? Local governance? Health-care service? Chances are you’re mad at someone on this list even as you read this book. Now, imagine all are merged into a single entity that is also your boss.

In space, as usual, things are worse: the infrastructure and utility people aren’t just keeping the toilet and electricity running; they’re deciding how much CO2 is in your air and controlling transportation in and out of town. Even if the company is not evil, it’s going to be hard to keep good relations, even at the best of times.

And it will not always be the best of times.

When Company Towns Go Bad

Unionization Attempts

On September 3, 1921, reporting on the then-ongoing miners' strike in West Virginia, the Associated Press released the following bulletin:

Sub district President Blizzard of the United Mine Workers . . . says five airplanes sent up from Logan county dropped bombs manufactured of gaspipe and high explosives over the miners’ land, but that no one was injured. One of the bombs, he reports, fell between two women who were standing in a yard, but it failed to explode.

“Failed to explode” is better than the alternative, but well, it’s the thought that counts.

Most strikes were not accompanied by attempted war crimes, but that particular strike, which was part of early-twentieth-century America’s aptly named Coal Wars, happened during a situation associated with increased danger — unionization attempts.

Looked at in strictly economic terms, this isn’t so surprising. From the company’s perspective, beyond unionization lies a huge unknown. Formerly direct decisions will have to run through a new and potentially antagonistic committee. The company will have less flexibility about wages and layoffs in case of an economic downturn. They may become less competitive with a nonunion entity. They may have to renegotiate every single employee contract.

Whether or not a union would be good per se in a space settlement, given how costly and hazardous any kind of strife would be, you may want to begin your space settlement with some sort of collective bargaining entity purely to avoid a dangerous transition. A union would also reduce some of the power imbalance by giving workers the ability to act collectively in their own interest. However, this may not happen in reality if the major space capitalists of today are the space company-town bosses of the future—both Elon Musk and Jeff Bezos kept their companies union-free during their tenures as CEO.

Economic Chaos

Another basic problem here is that company towns, being generally oriented around a single good, are extremely vulnerable to economic randomness. Several scholars have noted that company towns tend to be less prone to strife when they have fatter margins. It’s no coincidence that the pipe-bomb incident above came about during a serious drop in the price of coal early in the twentieth century. Price drops and general bad economic conditions can mean renegotiations of contracts in an environment where the company fears for its survival. Things can get nasty.

If Muskow makes its money on tourism, it might lose out when Apple opens a slightly cooler Mars resort two lava tubes over. Or there could be another Great Depression on Earth, limiting the desire for costly space vacations. So what’s a space CEO to do? In terrestrial company towns, if a Great Depression shows up, one option is for the town to just fold. It’s not a fun option, but at least there’s a train out of town or a chance to hitchhike. Mars has a once-every-two-years launch window.* Even a trip to Earth from the Moon requires a 380,000-kilometer shot in a rocket, which will likely never be cheap.

The biggest rockets on the drawing board today could perhaps transport a hundred people at a time. Even for a settlement of only ten thousand people, that’s a lot of transport infrastructure in case the town needs to be evacuated. Throw in that, at least right now, we don’t even know if people born and raised on the Moon or Mars can physiologically handle coming “back” to Earth, and, well, things get interesting.

The result is that there is a huge ethical onus on whoever’s setting this thing up. Not just to have a huge reserve of funding and supplies and transportation, so that people can be saved or evacuated if need be, but also to do the science in advance to determine if it’s even possible to bring home people born in partial Earth gravity.

There is some precedent for governments being willing to prop up company towns. Many old Soviet monotowns now receive economic aid from the Russian government. We should note, however, that keeping a small Russian village on life support will be a lot cheaper than maintaining an armada of megarockets for supplies and transportation.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-a-city-on-mars-kelly-and-zach-weinersmith-penguin-153023805.html?src=rss

Amazon is trialing a $10 monthly grocery subscription for Prime members

Amazon regularly makes changes to the costs and availability of its groceries. The company's latest attempt to drive more business comes in the form of a subscription plan. Prime members can now pay $10 monthly for unlimited free Amazon Fresh and Whole Foods deliveries on orders over $35. The subscription also includes unlimited 30-minute pickup orders, regardless of the amount spent. 

The addition of Whole Foods is notable as any delivery from the retailer through Amazon has included a $10 delivery fee since 2021. However, Amazon Fresh grocery deliveries over $35 were already free for Prime members until early this year when Amazon added a fee for grocery deliveries under $150. The company reduced that threshold to $100 in October, with a $7 fee on Fresh orders of $50 to $100 and a $10 fee on orders below $50. So, basically, Prime members can now pay $10 each month to have the same deal they had up until January — plus the $15 monthly or $139 annually to be a Prime member.
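
To make the fee math concrete, here is a minimal illustrative sketch in Python of the tiers described above. This is not Amazon's actual billing logic, and the behavior for subscriber orders under $35 isn't specified in the announcement, so it is left undefined.

```python
def fresh_delivery_fee(order_total: float, has_grocery_subscription: bool):
    """Delivery fee for a Prime member's Amazon Fresh order, per the tiers above."""
    if has_grocery_subscription:
        # The $10/month plan covers unlimited deliveries on orders over $35.
        return 0.0 if order_total > 35 else None  # sub-$35 behavior unspecified
    # Fee schedule for Prime members without the subscription (since October):
    if order_total > 100:
        return 0.0   # free delivery above the $100 threshold
    if order_total >= 50:
        return 7.0   # $7 fee on Fresh orders of $50 to $100
    return 10.0      # $10 fee on orders below $50
```

At those rates, the $10 plan roughly pays for itself after two mid-sized Fresh orders a month that would otherwise each incur the $7 fee.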

The addition of a subscription plan follows other attempts from Amazon to drum up business for its grocery sector. In November, Amazon expanded Fresh grocery delivery and pickup to anyone, not just Prime members. Amazon is starting small with its newest offer, rolling out the subscription plan to Prime members in three US cities: Sacramento, California; Columbus, Ohio; and Denver, Colorado.

This article originally appeared on Engadget at https://www.engadget.com/amazon-is-trialing-a-10-monthly-grocery-subscription-for-prime-members-123559940.html?src=rss

The Morning After: Google’s Gemini is the company’s answer to ChatGPT

Google officially introduced its most capable large language model to date, Gemini. CEO Sundar Pichai said it’s the first of “a new generation of AI models, inspired by the way people understand and interact with the world.” Of course, it’s all very complex, but Google’s multibillion-dollar investment in AI has created a model more flexible than anything before it. Let’s break it down.

The system has been developed from the ground up as an integrated multimodal AI. As Engadget’s Andrew Tarantola puts it, “think of many foundational AI models as groups of smaller models all stacked together.” Gemini is trained to seamlessly understand and reason about all kinds of inputs, and this should make it pretty capable in the face of complex coding requests and even physics problems.

Google

Gemini is being made available in three sizes: Nano, Pro and Ultra. Nano runs on-device, and Pro will fold into Google’s chatbot, Bard. The improved Bard chatbot will be available in the same 170 countries and territories as the existing service. Gemini Pro apparently outscored GPT-3.5, the earlier model that initially powered ChatGPT, on six of eight AI benchmarks. However, there are no comparisons yet between OpenAI’s dominant chatbot running on GPT-4 and this new challenger.

Meanwhile, Gemini Ultra, which won’t be available until at least 2024, scored higher than any other model, including GPT-4, on some benchmark tests. However, the Ultra flavor reportedly requires additional testing before being released to “select customers, developers, partners and safety and responsibility experts” for further testing and feedback.

— Mat Smith


The biggest stories you might have missed

A new report says ‘the world is on a disastrous trajectory,’ due to climate change

Google’s Gemini AI is coming to Android

The best travel gifts

How to use Personal Voice on iPhone with iOS 17

Half of London’s famous black cab fleet are now EVs

AMD’s Ryzen 8040 chips remind Intel it’s falling behind in AI PCs

Could MEMS be the next big leap in headphone technology?

The first affordable headphones with MEMS drivers have arrived

Creative’s Aurvana Ace line brings new speaker technology to the mainstream.

Engadget

The headphone industry isn’t known for its rapid evolution, which makes the arrival of Creative’s Aurvana Ace headphones — the first wireless buds with MEMS drivers — notable. MEMS-based headphones need a small amount of “bias” power to work. While Singularity used a dedicated DAC with a specific xMEMS “mode,” Creative uses an amp “chip” that demonstrates, for the first time, consumer MEMS headphones in a wireless configuration. If MEMS is to catch on, it has to be compatible with true wireless headphones.

Continue reading.

Apple and Google are probably spying on your push notifications

But the DOJ won’t let them fess up.

Foreign governments likely spy on your smartphone use, and now Senator Ron Wyden’s office is pushing for Apple and Google to reveal how exactly that works. Push notifications, the dings you get from apps calling your attention back to your phone, may be handed over from a company to government services if asked.

“Because Apple and Google deliver push notification data, they can be secretly compelled by governments to hand over this information,” Wyden wrote in the letter on Wednesday.

Apple claims it was barred from coming clean about this process, which is why Wyden’s letter specifically targets the Department of Justice. “In this case, the federal government prohibited us from sharing any information, and now that this method has become public, we are updating our transparency reporting to detail these kinds of requests,” Apple said in a statement to Engadget. Meanwhile, Google said it shared “the Senator’s commitment to keeping users informed about these requests.”

Continue reading.

Researchers develop under-the-skin implant to treat Type 1 diabetes

The device secretes insulin through islet cells.

Scientists have developed a new implantable device that could change the way Type 1 diabetics receive insulin. The thread-like implant, or SHEATH (Subcutaneous Host-Enabled Alginate THread), is installed in a two-step process, which ultimately leads to the deployment of “islet devices,” derived from the cells that produce insulin in our bodies naturally. A 10-centimeter-long islet device secretes insulin through islet cells that form around it, while also receiving nutrients and oxygen from blood vessels to stay alive. Because the islet devices eventually need to be removed, the researchers are still working on ways to maximize the exchange of nutrients and oxygen in large-animal models — and eventually patients.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-googles-gemini-is-the-companys-answer-to-chatgpt-121531424.html?src=rss

Meta’s Oversight Board is fast-tracking two cases about Israel-Hamas war content

Meta’s Oversight Board says it will fast-track two cases dealing with content takedowns on Facebook and Instagram related to the ongoing Israel-Hamas war. The cases mark the first time the independent board has opted to expedite a review, which allows it to make a decision in as little as 48 hours instead of the typical weeks or months-long process.

The group says it has seen a surge in appeals since the start of the conflict with “an almost three-fold increase in the daily average of appeals” related to the Middle East and North Africa. The board said it selected the two cases, one from Facebook and one from Instagram, because they “address important questions relating to the conflict and represent wider issues affecting Facebook and Instagram users.”

In both cases, Meta initially removed the posts but later restored them. The case originating from Instagram stems from an early November post “showing what appears to be the aftermath of an airstrike on a yard outside Al-Shifa Hospital in Gaza City.” Meta had taken down the post, citing its rules against violent content, but restored the post with a warning screen after the Oversight Board agreed to consider the case.

The case from Facebook deals with a video of Israeli hostages filmed during the October 7 attacks in Israel. Meta removed the video, citing its rules on dangerous organizations and individuals as well as violence and incitement. According to the Oversight Board, Meta later “revised its policy guidance in response to trends in how hostage kidnapping videos were being shared and reported on,” following the October 7 attacks.

The Oversight Board said in a statement it expects to make decisions about the cases within 30 days. As with other Oversight Board cases, Meta is required to comply with the board’s decision regarding whether the appealed content should be allowed to remain on its platform. The board will also make a series of policy recommendations to the company, though Meta isn’t bound to implement those changes.

Still, the board’s recommendations in these cases will likely be watched closely as Meta has faced increased scrutiny for its content moderation decisions since the start of the conflict. The company attempted to dispel accusations that it had “shadowbanned” Instagram users for sharing posts about the conditions in Gaza. Meta later blamed some of the issues on an unspecified “bug.”

The Oversight Board has previously raised questions about the company’s handling of content related to conflicts between Israel and Hamas. Last year, an independent report, commissioned by Meta following a recommendation from the board, found discrepancies in the company’s moderation practices that violated Palestinians’ right to free expression in 2021. In response to the report, Meta said it would update several of its rules, including its Dangerous Organizations and Individuals policy.

This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-is-fast-tracking-two-cases-about-israel-hamas-war-content-110028027.html?src=rss

The first affordable headphones with MEMS drivers don't disappoint

The headphone industry isn’t known for its rapid evolution. There are developments like spatial sound and steady advances in Bluetooth audio fidelity, but for the most part, the industry counts advances in decades rather than years. That makes the arrival of the Aurvana Ace headphones — the first wireless buds with MEMS drivers — quite the rare event. I recently wrote about what exactly MEMS technology is and why it matters, but Creative is the first consumer brand to sell a product that uses it.

Creative unveiled two models in tandem, the Aurvana Ace ($130) and the Aurvana Ace 2 ($150). Both feature MEMS drivers; the main difference is that the Ace supports high-resolution aptX Adaptive while the Ace 2 has top-of-the-line aptX Lossless (sometimes marketed as “CD quality”). The Ace 2 is the model we’ll be referring to from here on.

In fairness to Creative, the inclusion of MEMS drivers alone would be a unique selling point, but the aforementioned aptX support adds another layer of HiFi credentials to the mix. Then there’s adaptive ANC and other details like wireless charging that give the Ace 2 a strong spec sheet for the price. Some obvious omissions include small quality-of-life features, like pausing playback when you remove a bud, and audio personalization. Those could have been two easy wins, making both models fairly hard to beat for the price in terms of features if nothing else.

Photo by James Trew / Engadget

When I tested the first ever xMEMS-powered in-ear monitors, the Singularity Oni, the extra detail in the high end was instantly obvious, especially in genres like metal and drum & bass. The lower frequencies were more of a challenge, with xMEMS, the company behind the drivers in both the Oni and the Aurvana, conceding that a hybrid setup with a conventional bass driver might be the preferred option until its own speakers can handle more bass. That’s exactly what we have here in the Aurvana Ace 2.

The key difference between the Aurvana Ace 2 and the Oni, though, is more important than a good low-end thump (if that’s even possible). MEMS-based headphones need a small amount of “bias” power to work. This doesn’t impact battery life, but Singularity used a dedicated DAC with a specific xMEMS “mode,” while Creative uses a specific amp “chip” that demonstrates, for the first time, consumer MEMS headphones in a wireless configuration. The popularity of true wireless (TWS) headphones these days means that if MEMS is to catch on, it has to be compatible.

The good news is that, even without the expensive iFi DAC the Singularity Oni IEMs required, the Aurvana Ace 2 delivers more clarity in the higher frequencies than rival products at this price. That is to say, even with improved bass, the MEMS drivers clearly favor the mid to high frequencies. The result is a sound that strikes a good balance between detail and body.

Listening to “Master of Puppets,” the iconic chords had better presence and “crunch” than on a $250 pair of on-ear headphones I tried. Likewise, the aggressive snares in System of a Down’s “Chop Suey!” pop right through just as you’d hope. When I listened to the same song on the $200 Grell Audio TWS/1 with personalized audio activated, the sound was actually comparable; Creative’s just sounded like that out of the box, though the Grell buds have slightly better dynamic range overall and more emphasis on the vocals.

For more electronic genres the Aurvana Ace’s hybrid setup really comes into play. Listening to Dead Prez’s “Hip-Hop” really shows off the bass capabilities, with more oomph here than both the Grell and a pair of $160 House of Marley Redemption 2 ANC — but it never felt overdone or fuzzy/loose.

Photo by James Trew / Engadget

Despite the Ace 2 besting other headphones in specific like-for-like comparisons, the nuances and differences between the headphones as a whole are harder to quantify. The only set I tested that sounded consistently better, to me, was the Denon PerL Pro (formerly known as the NuraTrue Pro), but at $349 those are also the most expensive.

It would be remiss of me not to point out that there were also many songs and tests where differences between the various sets of earbuds were much harder to discern. With two iPhones, one Spotify account and a lot of swapping between headphones during the same song it’s possible to tease out small preferences between different sets, but the form factor, consumer preference and price point dictate that, to some extent, they all broadly overlap sonically.

The promise of MEMS drivers isn’t just about fidelity, though. The claim is that the lack of moving parts and the semiconductor-like fabrication process ensure a higher level of consistency, with less need for calibration and tuning. The end result is a more reliable production process, which should mean lower costs. In turn, this could translate into better value for money, or at least a potentially more durable product (if the companies choose to pass that saving on, of course).

For now, we’ll have to wait and see if other companies explore using MEMS drivers in their own products or whether it might remain an alternative option alongside technology like planar magnetic drivers and electrostatic headphones as specialist options for enthusiasts. One thing’s for sure: Creative’s Aurvana Ace series offers a great audio experience alongside premium features like wireless charging and aptX Lossless for a reasonable price — what’s not to like about that?

This article originally appeared on Engadget at https://www.engadget.com/the-first-affordable-headphones-with-mems-drivers-review-161536317.html?src=rss

Google's answer to GPT-4 is Gemini: 'the most capable model we’ve ever built'

OpenAI's spot atop the generative AI heap may be coming to an end as Google officially introduced its most capable large language model to date on Wednesday, dubbed Gemini 1.0. It's the first of “a new generation of AI models, inspired by the way people understand and interact with the world,” CEO Sundar Pichai wrote in a Google blog post.

“Ever since programming AI for computer games as a teenager, and throughout my years as a neuroscience researcher trying to understand the workings of the brain, I’ve always believed that if we could build smarter machines, we could harness them to benefit humanity in incredible ways,” Pichai continued.

The result of extensive collaboration between Google’s DeepMind and Research divisions, Gemini has all the bells and whistles cutting-edge genAIs have to offer. "Its capabilities are state-of-the-art in nearly every domain," Pichai declared. 

The system has been developed from the ground up as an integrated multimodal AI. Many foundational models can essentially be thought of as groups of smaller models all stacked together in a trench coat, with each individual model trained to perform its specific function as a part of the larger whole. That’s all well and good for shallow functions like describing images, but not so much for complex reasoning tasks.

Google, conversely, pre-trained and fine-tuned Gemini “from the start on different modalities,” allowing it to “seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models,” Pichai said. Being able to take in all these forms of data at once should help Gemini provide better responses on more challenging subjects, like physics.

Gemini can code as well. It’s reportedly proficient in popular programming languages including Python, Java, C++ and Go. Google has even leveraged a specialized version of Gemini to create AlphaCode 2, a successor to last year's competition-winning generative AI. According to the company, AlphaCode 2 solved twice as many challenge questions as its predecessor did, which would put its performance above an estimated 85 percent of the previous competition’s participants.

While Google did not immediately share the number of parameters that Gemini can utilize, the company did tout the model’s operational flexibility and ability to work in form factors from large data centers to local mobile devices. To accomplish this transformational feat, Gemini is being made available in three sizes: Nano, Pro and Ultra. 

Nano, unsurprisingly, is the smallest of the trio and designed primarily for on-device tasks. Pro is the next step up, a more versatile offering than Nano, and will soon be getting integrated into many of Google’s existing products, including Bard.

Starting Wednesday, Bard will begin using a specially tuned version of Pro that Google promises will offer “more advanced reasoning, planning, understanding and more.” The improved Bard chatbot will be available in the same 170 countries and territories that regular Bard currently is, and the company reportedly plans to expand the new version's availability as we move through 2024. Next year, with the arrival of Gemini Ultra, Google will also introduce Bard Advanced, an even beefier AI with added features.

Pro’s capabilities will also be accessible via API calls through Google AI Studio or Google Cloud Vertex AI. Search (specifically SGE), Ads, Chrome and Duet AI will also see Gemini functionality integrated into their features in the coming months.
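
For a sense of what that API access might look like in practice, here is a minimal, hypothetical sketch using Google's generative AI Python client; the package, class and model names below are assumptions based on Google's announced tooling rather than anything confirmed in this article.

```python
# Hypothetical sketch of calling Gemini Pro through Google AI Studio's Python client.
# The package, class and model identifiers are assumptions, not confirmed here.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key issued via Google AI Studio

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "In two sentences, explain why a natively multimodal model helps with physics problems."
)
print(response.text)
```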

Gemini Ultra won’t be available until at least 2024, as it reportedly requires additional red-team testing before being cleared for release to “select customers, developers, partners and safety and responsibility experts” for testing and feedback. But when it does arrive, Ultra promises to be an incredibly powerful tool for further AI development.

This article originally appeared on Engadget at https://www.engadget.com/googles-answer-to-gpt-4-is-gemini-the-most-capable-model-weve-ever-built-150039571.html?src=rss

Meta is disconnecting Messenger and Instagram chat later this month

Meta will soon remove a feature that lets you chat with Facebook friends on Instagram. Starting mid-December, the company will disconnect the cross-platform integration, which it added in 2020. It didn’t provide a reason for doing so, but, as 9to5Google speculates, avoiding regulatory consequences in the EU sounds like a logical motive.

Announced in 2019, the optional cross-platform integration went live a year later, blurring the lines between two of the company’s most popular services. “Just like today you could talk to a Gmail account if you have a Yahoo account, these accounts will be able to talk to each other through the shared protocol that is Messenger,” Messenger VP Loredana Crisan said at the time.

Meta says once “mid-December 2023” rolls around, you’ll no longer be able to start new chats or calls with Facebook friends from Instagram. If you have any existing conversations with Facebook accounts on Instagram, they’ll become read-only. In addition, Facebook accounts will no longer be able to see your activity status or view read receipts. Finally, any existing chats with Facebook accounts won’t move to your inbox on either platform.

The EU designed its landmark Digital Markets Act, passed in 2022, to deter platform holders from gaining monopoly power (or something close to it). If a company passes a revenue threshold and the European Commission deems its platform overly dominant, the Commission can dole out a maximum penalty of 10 percent of the company's total global turnover from the previous year. Given the enforcement “stick” this provides the governing body, perhaps Meta saw the writing on the wall and deemed the Instagram / Facebook cross-messaging feature not worth the risk.

This article originally appeared on Engadget at https://www.engadget.com/meta-is-disconnecting-messenger-and-instagram-chat-later-this-month-205956880.html?src=rss

Huawei is allegedly building a self-sufficient chip network using state investment fund

We've seen Huawei's surprising strides with its recent smartphones — especially the in-house 7nm 5G processor within — but apparently the company has been working on something far more significant to bypass US sanctions. According to a new Bloomberg investigation, a Shenzhen city government investment fund created in 2019 has been helping Huawei build "a self-sufficient chip network."

Such a network would give the tech giant access to enterprises — most notably, the three subsidiaries under a firm called SiCarrier — that are key to developing lithography machines. Lithography equipment, especially the high-end extreme ultraviolet variety, would usually have to be imported into China, but it's currently restricted by US, Dutch and Japanese sanctions. Huawei apparently went as far as transferring "about a dozen patents to SiCarrier," as well as letting SiCarrier's elite engineers work directly on its sites, which suggests the two firms have a close symbiotic relationship.

Bloomberg's source claims that Huawei has hired several former employees of Dutch lithography specialist ASML to work on this breakthrough. The result so far is allegedly the 7nm HiSilicon Kirin 9000S processor, fabricated locally by SMIC (Semiconductor Manufacturing International Corporation), which is said to be about five years behind the leading competition (say, Apple Silicon's 3nm process) — as opposed to the eight-year gap intended by the Biden administration's export ban.

Huawei's Mate 60, Mate 60 Pro, Mate 60 Pro+ and Mate X5 foldable all feature this HiSilicon chip, as well as other Chinese components like display panels (BOE), camera modules (OFILM) and batteries (Sunwoda). Huawei having its own network of local enterprises would eventually allow it to rely less on imported components, and potentially even become the halo of the Chinese chip industry — especially in the age of electric vehicles and AI, where more chips are needed than ever (as much as NVIDIA would like to deal with China). That said, Huawei apparently denied that it had been receiving government help to achieve this goal.

Given Huawei's seeming progress, and the fact that China has been pumping billions into its chip industry, the US government will just have to try harder.

This article originally appeared on Engadget at https://www.engadget.com/huawei-is-allegedly-building-a-self-sufficient-chip-network-using-state-investment-fund-051823202.html?src=rss

Google Messages now lets you choose your own chat bubble colors

Google is rolling out a string of updates for the Messages app, including the ability to customize the colors of the text bubbles and backgrounds. So, if you really want to, you can have blue bubbles in your Android messaging app. You can have a different color for each chat, which could help prevent you from accidentally leaking a secret to family or friends.

With the help of on-device Google AI (meaning you'll likely need a recent Pixel device to use this feature), you can transform photos into reactions with Photomoji. All you need to do is pick a photo, decide which object (or person or animal) you'd like to turn into a Photomoji and hit the send button. These reactions will be saved for later use, and friends in the chat can use any Photomoji you send them as well.

The new Voice Moods feature allows you to apply one of nine different vibes to a voice message, by showing visual effects such as heart-eye emoji, fireballs (for when you're furious) and a party popper. Google says it has also upgraded the quality of voice messages by bumping up the bitrate and sampling rate.

In addition, there are more than 15 Screen Effects you can trigger by typing things like "It's snowing" or "I love you." These will make "your screen erupt in a symphony of colors and motion," Google says. Elsewhere, Messages will display animated effects when certain reactions and emoji are used.

Google

On top of all of that, users will now be able to set up a profile that appends their name and photo to their phone number, giving them more control over how they appear across Google services. The company says this feature could help when it comes to receiving messages from a phone number that isn't in your contacts, and it could help you know the identity of everyone in a group chat too.

Some of these features will be available in beta starting today in the latest version of Google Messages. Google notes that some feature availability will depend on market and device.

Google is rolling out these updates alongside the news that more than a billion people now use Google Messages with RCS enabled every month. RCS (Rich Communication Services) is a more feature-filled and secure format of messaging than SMS and MMS. It supports features such as read receipts, typing indicators, group chats and high-res media. Google also offers end-to-end encryption for one-on-one and group conversations via RCS.

For years, Google had been trying to get Apple to adopt RCS for improved interoperability between Android and iOS. Apple refused, perhaps because iMessage (and its blue bubbles) has long been a status symbol for its users. However, likely to ensure it falls in line with European Union regulations, Apple has relented. The company recently said it would start supporting RCS in 2024.

This article originally appeared on Engadget at https://www.engadget.com/google-messages-now-lets-you-choose-your-own-chat-bubble-colors-170042264.html?src=rss

How OpenAI's ChatGPT has changed the world in just a year

Over the course of two months from its debut in November 2022, ChatGPT exploded in popularity, from niche online curio to 100 million monthly active users — the fastest user base growth in the history of the Internet. In less than a year, it has earned the backing of Silicon Valley’s biggest firms, and been shoehorned into myriad applications from academia and the arts to marketing, medicine, gaming and government.

In short, ChatGPT is just about everywhere. Few industries have remained untouched by the viral adoption of generative AI tools. On the first anniversary of its release, let’s take a look back on the year of ChatGPT that brought us here.

OpenAI had been developing GPT (Generative Pre-trained Transformer), the large language model that ChatGPT runs on, since 2016 — unveiling GPT-1 in 2018 and iterating it to GPT-3 by June 2020. With the November 30, 2022 release of GPT-3.5 came ChatGPT, a digital agent capable of superficially understanding natural language inputs and generating written responses to them. Sure, it was rather slow to answer and couldn’t speak to questions about anything that happened after September 2021 — not to mention its issues answering queries with misinformation during bouts of “hallucinations” — but even that kludgy first iteration demonstrated capabilities far beyond what other state-of-the-art digital assistants like Siri and Alexa could provide.

ChatGPT’s release timing couldn’t have been better. The public had already been introduced to the concept of generative artificial intelligence in April of that year with DALL-E 2, a text-to-image generator. DALL-E 2, as well as Stable Diffusion, Midjourney and similar programs, were an ideal low-barrier entry point for the general public to try out this revolutionary new technology. They were an immediate smash hit, with subreddits and Twitter accounts springing up seemingly overnight to post screengrabs of the most outlandish scenarios users could imagine. And it wasn’t just the terminally online who embraced AI image generation; the technology immediately entered the mainstream discourse as well, extraneous digits and all.

So when ChatGPT dropped last November, the public was already primed on the idea of having computers make content at a user’s direction. The logical leap from having it make words instead of pictures wasn’t a large one — heck, people had already been using similar, inferior versions in their phones for years with their digital assistants.

Q1: [Hyping intensifies]

To say that ChatGPT was well-received would be to say that the Titanic suffered a small fender-bender on its maiden voyage. It was a polestar, orders of magnitude bigger than the hype surrounding DALL-E and other image generators. People flat-out lost their minds over the new AI and its CEO, Sam Altman. Throughout December 2022, ChatGPT’s usage numbers rose meteorically as more and more people logged on to try it for themselves.

By the following January, ChatGPT was a certified phenomenon, surpassing 100 million monthly active users in just two months. That was faster than both TikTok and Instagram, and it remains the fastest user adoption to 100 million in the history of the internet.

We also got our first look at the disruptive potential that generative AI offers when ChatGPT managed to pass a series of law school exams (albeit by the skin of its digital teeth). That same January, Microsoft extended its existing R&D partnership with OpenAI to the tune of $10 billion. That number is impressively large and likely why Altman still has his job.

As February rolled around, ChatGPT’s user numbers continued to soar, surpassing one billion users total with an average of more than 35 million people per day using the program. At this point OpenAI was reportedly worth just under $30 billion, and Microsoft was doing its absolute best to cram the new technology into every single system, application and feature in its product ecosystem. ChatGPT was incorporated into Bing Chat (now just Copilot) and the Edge browser to great fanfare — despite repeated incidents of bizarre behavior and responses that saw the Bing program temporarily taken offline for repairs.

Other tech companies began adopting ChatGPT as well: Opera incorporated it into its browser, Snapchat released its GPT-based My AI assistant (which would be unceremoniously abandoned a few problematic months later) and BuzzFeed News’ parent company used it to generate listicles.

March saw more of the same, with OpenAI announcing a new subscription-based service — ChatGPT Plus — which offers users the chance to skip to the head of the queue during peak usage hours along with features not found in the free version. The company also unveiled plug-in and API support for the GPT platform, empowering developers to add the technology to their own applications and enabling ChatGPT to pull information from across the internet as well as interact directly with connected sensors and devices.
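
As a rough illustration of what that API support lets developers do, the sketch below uses OpenAI's current Python client to send a chat request; the model name and prompt are placeholders, and plug-ins and browsing are configured separately, so they aren't shown.

```python
# Illustrative sketch of a third-party app calling the ChatGPT API with
# OpenAI's Python client; the model name and prompt here are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize this support ticket in one sentence."},
    ],
)
print(response.choices[0].message.content)
```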

ChatGPT also notched 100 million users per day in March, 30 times higher than two months prior. Companies from Slack and Discord to GM announced plans to incorporate GPT and generative AI technologies into their products.

Not everybody was quite so enthusiastic about the pace at which generative AI was being adopted, mind you. In March, OpenAI co-founder Elon Musk, along with Steve Wozniak and a slew of AI researchers, signed an open letter demanding a six-month moratorium on AI development.

Q2: Electric Boog-AI-loo

Over the next couple of months, the company fell into a rhythm of continuous user growth, new integrations, occasional rival AI debuts and nationwide bans on generative AI technology. For example, in April, ChatGPT’s usage climbed nearly 13 percent month-over-month from March even as Italy outlawed ChatGPT use nationwide, citing GDPR data privacy violations. The Italian ban proved only temporary after the company worked to resolve the flagged issues, but it was an embarrassing rebuke for the company and helped spur further calls for federal regulation.

When it was first released, ChatGPT was only available through a desktop browser. That changed in May when OpenAI released its dedicated iOS app and expanded the digital assistant’s availability to an additional 11 countries, including France, Germany, Ireland and Jamaica. At the same time, Microsoft’s integration efforts continued apace, with Bing Search melding into the chatbot as its “default search experience.” OpenAI also expanded ChatGPT’s plug-in system to ensure that more third-party developers were able to build ChatGPT into their own products.

ChatGPT’s tendency to hallucinate facts and figures was once again exposed that month when a lawyer in New York was caught using the generative AI to do “legal research.” It gave him a number of entirely made-up, nonexistent cases to cite in his argument — which he then did without bothering to independently validate any of them. The judge was not amused.

By June, a little bit of ChatGPT’s shine had started to wear off. Congress reportedly restricted Capitol Hill staffers’ use of the application over data handling concerns. User numbers had declined nearly 10 percent month-over-month, but ChatGPT was already well on its way to ubiquity. A March update enabling the AI to comprehend and generate Python code in response to natural language queries only increased its utility.

Q3: [Pushback intensifies]

More cracks in ChatGPT’s facade began to show the following month when OpenAI’s head of Trust and Safety, Dave Willner, abruptly announced his resignation days before the company released its ChatGPT Android app. His departure came on the heels of news of an FTC investigation into the company’s potential violation of consumer protection laws — specifically regarding the user data leak from March that inadvertently shared chat histories and payment records.

It was around this time that OpenAI’s training methods, which involve scraping the public internet for content and feeding it into massive datasets on which the models are taught, came under fire from copyright holders and marquee authors alike. Much in the same manner that Getty Images sued Stability AI for Stable Diffusion’s obvious leverage of copyrighted materials, stand-up comedian and author Sarah Silverman brought suit against OpenAI with allegations that its “Books2” dataset illegally included her copyrighted works. The Authors Guild, which represents Stephen King, John Grisham and 134 others, launched a class-action suit of its own in September. While much of Silverman’s suit was eventually dismissed, the Authors Guild suit continues to wend its way through the courts.

Select news outlets, on the other hand, proved far more amenable. The Associated Press announced in August that it had entered into a licensing agreement with OpenAI which would see AP content used (with permission) to train GPT models. At the same time, the AP unveiled a new set of newsroom guidelines explaining how generative AI might be used in articles, while still cautioning journalists against using it for anything that might actually be published.

ChatGPT itself didn’t seem too inclined to follow the rules. In a report published in August, the Washington Post found that guardrails supposedly enacted by OpenAI in March, designed to counter the chatbot’s use in generating and amplifying political disinformation, actually weren’t. The company told Semafor in April that it was "developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying." Per the Post, those rules simply were not enforced, with the system eagerly returning responses for prompts like “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.”

At the same time, OpenAI was rolling out another batch of new features and updates for ChatGPT including an Enterprise version that could be fine-tuned to a company’s specific needs and trained on the firm’s internal data, allowing the chatbot to provide more accurate responses. Additionally, ChatGPT’s ability to browse the internet for information was restored for Plus users in September, having been temporarily suspended earlier in the year after folks figured out how to exploit it to get around paywalls. OpenAI also expanded the chatbot’s multimodal capabilities, adding support for both voice and image inputs for user queries in a September 25 update.

Q4: Starring Sam Altman as “Lazarus”

The fourth quarter of 2023 has been a hell of a decade for OpenAI. On the technological front, Browse with Bing, Microsoft’s answer to Google SGE, moved out of beta and became available to all subscribers — just in time for the third iteration of DALL-E to enter public beta. Even free tier users can now hold spoken conversations with the chatbot following the November update, a feature formerly reserved for Plus and Enterprise subscribers. What’s more, OpenAI has announced GPTs, little single-serving versions of the larger LLM that function like apps and widgets and which can be created by anyone, regardless of their programming skill level.

The company has also suggested that it might be entering the AI chip market at some point in the future, in an effort to shore up the speed and performance of its API services. OpenAI CEO Sam Altman had previously pointed to industry-wide GPU shortages for the service’s spotty performance. Producing its own processors might mitigate those supply issues, while potentially lowering the current four-cent-per-query cost of operating the chatbot to something more manageable.

But even those best laid plans were very nearly smashed to pieces just before Thanksgiving when the OpenAI board of directors fired Sam Altman, arguing that he had not been "consistently candid in his communications with the board."

That firing didn't take. Instead, it set off 72 hours of chaos within the company itself and the larger industry, with waves of recriminations and accusations, threats of resignation by the lion’s share of the staff and actual resignations by senior leadership happening by the hour. The company went through three CEOs in as many days, landing back on the one it started with, albeit now free from a board of directors that would even consider acting as a brake against the technology’s further, unfettered commercial development.

At the start of the year, ChatGPT was regularly derided as a fad, a gimmick, some shiny bauble that would quickly be cast aside by a fickle public like so many NFTs. Those predictions could still prove true, but as 2023 has ground on and ChatGPT’s adoption has continued to broaden, the chances of those dim prognoses coming to pass feel increasingly remote.

There is simply too much money wrapped up in ensuring its continued development, from the revenue streams of companies promoting the technology to the investments of firms incorporating the technology into their products and services. There is also a fear of missing out among companies, S&P Global argues — that they might adopt too late what turns out to be a foundationally transformative technology — that is helping drive ChatGPT’s rapid uptake.

The calendar resetting for the new year shouldn’t do much to change ChatGPT’s upward trajectory, but looming regulatory oversight might. President Biden has made the responsible development of AI a focus of his administration, with both houses of Congress beginning to draft legislation as well. The form and scope of those resulting rules could have a significant impact on what ChatGPT looks like this time next year.

This article originally appeared on Engadget at https://www.engadget.com/how-openais-chatgpt-has-changed-the-world-in-just-a-year-140050053.html?src=rss