Google Translate is getting an AI-powered upgrade in the coming weeks to help you find more accurate translations, particularly for words with multiple definitions. The app will offer additional contextual translation options with descriptions and examples.
Let's say you're looking for a translation of the word "row," which has multiple meanings in English. It could refer to an argument, a line of seats on a plane or the act of propelling a boat with oars. Google Translate should soon offer translations for all of those variants, along with examples of how they're used.
Google
Google says the app will provide "the context you need to accurately translate and use the right turns of phrase, local idioms or appropriate words depending on your intent." All going well, this should help you communicate more clearly in different languages. The upgraded contextual translations will be available for languages such as English, French, German, Japanese and Spanish starting this month, with more surely to follow.
Meanwhile, the company recently rolled out a Translate redesign on Android. It's coming to the iOS app soon. The revamped app introduces more gestures that should make it easier to use, including swifter access to language selection and the option to swipe to bring up recent translations. Google says translations are now more readable, while an extra 33 languages are available for on-device translation, including Basque, Hawaiian, Hmong, Kurdish, Sundanese, Yiddish and Zulu.
TikTok is, once again, facing an uncertain future. The company has spent the last two years quietly negotiating with US government officials in order to avoid an outright ban. But that process has now stalled, and calls for a ban have only intensified.
Next month, TikTok CEO Shou Zi Chew will testify at a House Energy and Commerce Committee hearing, his first Congressional appearance. Many lawmakers have called for a more sweeping ban, and will likely quiz Chew about TikTok’s alleged risks to national security, and its parent company’s Chinese ownership.
TikTok has long denied that it’s a threat, and downplays its ties to China. But now the company is also trying a new tactic to prove it has nothing to hide: its Transparency and Accountability Center. The company first introduced the idea in 2020, but the actual facility didn’t open until recently due to COVID-related delays. Last week, the company took a handful of reporters on a tour of the center as part of a new charm offensive as it tries to fend off regulators and the looming prospect of more bans in the United States.
Karissa Bell / Engadget
The first thing you notice when you walk in is that, despite being dedicated to “transparency,” there are no windows in the space, which is housed in an office park near TikTok’s Culver City US HQ. Instead, visitors are greeted with neon-lit signs and big, interactive displays dedicated to explaining various aspects of the app.
The company hopes visitors will walk away with a better understanding of how the app operates and, perhaps, less suspicion. “We really do understand the critique that big media, big tech, plays as it relates to how algorithms work, how moderation policies work and the data flows of the systems,” says TikTok COO Vanessa Pappas. “A lot of these are unprecedented levels of transparency that we're providing.”
What you’ll actually learn by touring the center, though, largely depends on how much you already know about TikTok when you walk in the door. It’s primarily dedicated to explaining the app’s content moderation policies, and how it handles recommendations, both of which have been heavily scrutinized.
There are two interactive exhibits: a “moderation station,” where visitors can play the role of a TikTok content moderator, and another room that’s meant to “demystify” the app’s vaunted recommendation algorithm.
In the moderation room, you can watch sample videos — presented in an interface similar to what TikTok’s actual content moderators see — and try your hand at judging which ones violate the app’s rules. Meanwhile, the room next door is dedicated to “the algorithm.” It’s more of an illustrated FAQ that offers fairly broad explanations to high-level questions about how the app recommends content. The content is more detailed than TikTok’s extremely vague in-app explanations, but that’s not saying much. For example, under the heading “What information does TikTok use to create personalized experiences?” it explains that users’ interactions with content are tracked to inform the underlying recommendation model. That might be useful info if you know nothing about how algorithms work, but it doesn’t tell you very much about TikTok.
Each explanation is also accompanied by a visualization and a snippet of “simulated code” — the company tightly controls who can view the app’s actual source code — to illustrate what’s happening at various stages of the recommendations process. But again, this felt like it was designed more for people who know nothing about TikTok than for those who are trying to understand the nuances of its algorithm. There is a space at the transparency center, a server room behind a neon "LATC" sign, where auditors can enter and — after heavy security — dig into TikTok's actual source code. But the vast majority of visitors to the center will never make it into that room.
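For what it's worth, the kind of logic such "simulated code" typically gestures at is straightforward: a recommender computes a weighted score from interaction signals and ranks candidate videos by it. Here's a toy sketch in that spirit; the signal names and weights are invented for illustration and bear no relation to TikTok's actual model.

```python
# Toy engagement-weighted recommendation scorer. Signal names and
# weights are illustrative assumptions, not TikTok's real system.
WEIGHTS = {"watched_to_end": 3.0, "liked": 2.0, "shared": 4.0, "skipped": -2.0}

def score(video_signals: dict) -> float:
    """Sum the weighted interaction signals for one candidate video."""
    return sum(WEIGHTS[s] * v for s, v in video_signals.items() if s in WEIGHTS)

def rank(candidates: dict) -> list:
    """Order candidate videos by descending engagement score."""
    return sorted(candidates, key=lambda vid: score(candidates[vid]), reverse=True)

feed = {
    "dance_clip": {"watched_to_end": 1, "liked": 1},    # score 5.0
    "news_clip": {"skipped": 1},                        # score -2.0
    "cooking_clip": {"watched_to_end": 1, "shared": 1}, # score 7.0
}
print(rank(feed))  # → ['cooking_clip', 'dance_clip', 'news_clip']
```

The real model is vastly more complex, of course, but the "interactions in, ranked feed out" shape is what the exhibit's broad explanations describe.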
Overall, I can see how the tour might be a worthwhile exercise for lawmakers, who too often show they know shockingly little about how the internet works. But it also feels a bit performative, and I can’t help but remember Facebook’s infamous “war room” tour, when it invited reporters to visit a conference room dedicated to safeguarding elections only to shut it down a month later.
To be clear, TikTok does intend for the transparency center to be a permanent fixture. And the company plans to open more of them in other locations around the world. But while these facilities may help Boomer lawmakers and regulators understand what TikTok is, I’m not sure they will be able to dispel the perception that there's something else, something more secretive, going on within the company. It’s one thing to illustrate how TikTok’s algorithm works at a high level, but it’s another to prove that something isn’t happening.
Karissa Bell / Engadget
It’s notable, then, that TikTok’s Transparency Center doesn’t address some of the biggest concerns that have been raised about TikTok: its relationship with parent company Bytedance and whether the Chinese government could somehow take advantage of the relationship to advance its interests. “If you fundamentally distrust the autocratic Chinese government, and how it uses its relationship with large Chinese-based corporations to extend its influence around the world, then all the promises TikTok can pile up are not going to completely allay your anxiety about TikTok,” Paul Barrett, the deputy director of NYU's Stern Center for Business and Human Rights, told Engadget.
TikTok does, however, have a plan to address government concerns that it could be a national security threat. The company has been locked in negotiations with the Committee on Foreign Investment in the United States (CFIUS) for more than two years over its future in the US. And it struck a deal with Oracle last year to safeguard US user data as part of this effort, known as “Project Texas,” to reassure US officials.
Until now, TikTok has been fairly tight-lipped about Project Texas and its dealings with CFIUS. But now that those talks have stalled — despite TikTok claiming it’s addressed every concern raised by regulators — the company has been cautiously sharing more details about its arrangements with Oracle.
Reporters who attended the tour were given an overview of the plan, but were asked not to directly quote the executives who described it.
Central to the plan is a new US subsidiary called TikTok US Data Security (USDS), which will have an independent board of CFIUS-approved directors with national security and cybersecurity backgrounds. On the TikTok side, there will be two executives running the US subsidiary, who will report to the board.
TikTok
Meanwhile, all US user data will be housed within Oracle’s Cloud infrastructure with strict controls to prevent unauthorized access and to keep most data from leaving. (Some data about what US users are doing will inevitably have to leave in order to, for example, allow people to interact with content and users from other countries.) Oracle will also review TikTok’s entire source code, as will a separate, outside auditor. Future app updates will also be inspected by Oracle, which will take over responsibility for sending updates to the app stores. Oracle will also monitor TikTok’s recommendation algorithm and content moderation systems. The US government, via CFIUS, will continue to have visibility and oversight into what USDS is doing on an ongoing basis.
TikTok says it’s confident these steps address every issue that’s been raised about what TikTok could potentially be doing. Executives also point out that the company has already dedicated an astonishing amount of money — $1.5 billion — and resources to Project Texas. If all that’s good enough for CFIUS, they say, it should be good enough for Congress.
Whether lawmakers will be satisfied with any scenario that allows TikTok to operate in the United States without being fully divested from ByteDance, though, remains to be seen. “They [TikTok] can make all of these arrangements, and put in place all these safeguards, almost to infinity,” Barrett says. “And it's not clear to me that that would satisfy China hawks in the United States.”
That’s partly because TikTok is a convenient punching bag for lawmakers who want to appear tough on China. But there are also legitimate reasons to be concerned about TikTok. ByteDance recently fired four employees who accessed the personal data of an American journalist who had reported on the company. TikTok also has a history of taking, at best, a heavy-handed approach to content moderation that some have equated with censorship favorable to the Chinese government.
According to TikTok, Project Texas will ensure neither scenario can happen again. But the fact that it already has will undoubtedly lead to further questions about just how deep the company’s commitment to transparency and accountability really is.
Last year’s OnePlus 10 Pro is set to be replaced by the OnePlus 11. There won’t be a OnePlus 11 Pro, and there wasn’t a regular OnePlus 10. Things could be more straightforward, but what are you going to do? Barring any spinoffs, this is OnePlus’ flagship phone, the focus of all its attention, development budget and everything else. Leaks meant we knew what the OnePlus 11 would look like long before it was officially unveiled. It has everything we loved about OnePlus in the past: a powerful, high-end processor, a vivid screen and (after a brief diversion) a competitive price tag ($699).
The OnePlus 11 launches the same week as a trio of phones from Samsung, the dominant Android phone player. The 2023 Galaxy S series has phones that are both bigger and smaller, pricier and cheaper, than OnePlus’ newest phone. Fortunately, this new flagship has one trick to stand out from Samsung’s new lineup: incredibly fast 100-watt charging.
Design
Mat Smith/Engadget
OnePlus has made some changes to the design. The company drew inspiration from sports cars (it has collaborated with McLaren in the past) and Swiss watches. This apparently led to a unibody slab with a stainless steel camera array. There are some fine details in the camera glass which give it a little bit of a watch aesthetic, but it’s basically just another giant camera pop-out. The metal bezel protrudes slightly more than the glass, which should help avoid scratches. Initially, I thought it was a little too big and ostentatious, but it’s roughly equivalent to the unit on the iPhone 14 Pro – and the Pixel 7 Pro’s Cyclops camera bar is arguably even flashier.
The OnePlus 11 also answered the pleas of the OnePlus faithful by reintroducing its Alert Slider. If you haven’t seen it on previous phones, it’s a metal slider just above the power button on the right edge that swaps between silent, ring and vibrate modes. OnePlus claimed in previous years that the removal was due to space demands inside the phones and that the slider would make a return, so the company has at least fulfilled that promise to its fans. For the rest of us, I’m not sure we need it. Then again, I’m the kind of person who keeps my phone on silent pretty much all the time.
The phone comes in glossy gray-green (Eternal Green) and sparkling black (Titan Black) color options. I thought the black finish would have a gritty, 3D-printed feel to it, but it’s closer to slate – somehow smooth and grippy at the same time. I don’t understand the physics of it either. Meanwhile, the green version is a lot like last year’s phones. The shiny finish is unfortunately a canvas for all your fingerprints and smudges.
The OnePlus 11 has a gorgeous 6.7-inch 2,048 × 1,080 OLED screen that can reach up to 120Hz refresh rates. Once again, there’s an LTPO panel, which can now dip as low as 1Hz when the always-on display (AOD) is active. OnePlus claims that, compared with typical 30Hz AODs, the OnePlus 11’s 1Hz AOD consumes 30 percent less power. Of course, an AOD isn’t using much power to begin with, but it could lead to a little more battery life in the long run, even if it’s not represented in our typical battery rundown tests. Aside from the upgraded AOD capabilities, this screen is otherwise identical to the OnePlus 10 Pro’s – which isn’t a bad thing. It’s another area where OnePlus often goes toe-to-toe with the best smartphones out there, despite typically costing hundreds of dollars less.
Camera
Mat Smith/Engadget
The OnePlus 11’s primary camera is a new 50-megapixel sensor with an f/1.8 aperture. It sounds similar to the OnePlus 10 Pro’s main camera, but uses a bigger 1/1.56-inch sensor. This works alongside a 115-degree ultrawide 48MP camera that pulls double-duty for macro shots.
This time, OnePlus’ flagship has a 32MP telephoto camera, up from a measly 8MP on its predecessor. However, optical zoom tops out at 2x, while the lower-res OnePlus 10 Pro could punch in at up to 3.2x optical zoom. It’s an unusual change to make. Which is the better solution? While I didn’t have last year’s OnePlus 10 Pro to compare directly with the OnePlus 11, the images didn’t seem as muted as I remember. Of course, they weren’t as magnified, but given the higher resolution, I can crop in without ruining the results too much. Cropping like this is essentially a manual digital zoom, but you benefit from a better sensor before you throw away the excess megapixels.
Mat Smith/Engadget
The OnePlus 11’s camera array, what it calls its third-generation Hasselblad Camera, is improved, and it still comes with some addictive filters for stills and video. However, it doesn’t quite reach the pinnacle of smartphone photography, currently occupied by the Galaxy S22 Ultra, iPhone 14 Pro and Pixel 7 Pro.
The OnePlus 11 seems to do its best work on landscape and street photography. There’s a Pro mode again, so you can dabble in RAW editing, but I was more than happy with the JPGs. The OnePlus 11 also features an AI Highlight video mode. The phone uses image processing to maintain even levels of exposure when recording video and shifting between areas of different lighting. I tested it out on a sunny afternoon, on a bridge, but I didn’t see many tangible benefits in taming overexposure. It works a little better when you’re filming in mostly dark situations but, oddly, for what’s meant to be the standout software feature of this year’s OnePlus flagship, it’s not particularly remarkable.
Performance and battery life
Mat Smith/Engadget
The OnePlus 11 has the de facto top Android processor: the Qualcomm Snapdragon 8 Gen 2. It’s powerful, sure, but the bigger benefits might come through longer battery life. According to the chip maker, its new Adreno GPU can offer up to 45 percent better power efficiency.
OnePlus software doesn’t appear to have changed much since the OnePlus 10 Pro. OnePlus claims its HyperBoost Gaming Engine uses machine learning (and Qualcomm’s latest processor) to balance performance and battery drain. The caveat here is that it’s only compatible with major mobile titles like Genshin Impact. It’s also hard to tell whether it offers a discernible impact on games, when so many other phones are similarly specced and offer a similar experience.
The OnePlus phone series typically offers decent battery life, so it’s no surprise that the OnePlus 11 clocked almost 20 hours on our video rundown test, although the battery icon seemed a little ‘sticky’ around 100 percent after a good two hours of video playback. The only phones that beat that are the company’s own OnePlus 10T and the iPhone 14 Pro.
While that’s great, the speed that OnePlus 11 can charge is even more impressive. 100W charging is here – the kind of wattage we get with laptops. There’s a compatible charger in the box, thankfully, but it’s proprietary tech, so you'll need this specific charger, this cable and OnePlus’ latest phone to hit those heady charging speeds. OnePlus says it takes 25 minutes to reach a full charge from empty, and in practice, that’s been accurate.
The ability to plug my phone in for a brief stint (roughly ten minutes) and have it top up 50 percent is magical. However, there’s no wireless charging. It’s not a deal breaker for me, but it’s definitely something to note as missing from a flagship phone. Personally, I’d take these charging speeds over wireless charging any day.
Wrap-up
Mat Smith/Engadget
The OnePlus 11 has a great screen (again), incredibly fast charging (again) and cameras that are better than its predecessor’s. However, those supercharged speeds seem to be the only unique thing the OnePlus 11 brings to the table. Is that enough to make you want to upgrade from an older phone, or choose a OnePlus over the competition?
At $699 with 8GB of memory and 128GB of storage, the OnePlus 11 costs $100 less than last year’s flagship. It’s a much better deal than its predecessor, and that could be a deciding factor. The OnePlus 11 sits between midrange devices, like the Pixel 6a, and premium phones including Google’s Pixel 7 and Samsung’s Galaxy S23 series. In many ways, you get the best of both, but against the dominance of other phone makers, it needs to do more to distinguish itself.
Next year could see the introduction of a new flagship iPhone. According to Bloomberg’s Mark Gurman, Apple is considering whether to release a more expensive iPhone “Ultra” that would slot in above the iPhone Pro and Pro Max. He says the device could arrive as early as next year.
If you’ve been following Gurman’s writing for a while, you may recall he previously reported Apple was considering whether to rebrand the upcoming iPhone 15 Pro Max to the iPhone 15 Ultra. Now, he says there’s evidence to suggest Apple wants to instead offer a more powerful and expensive iPhone to well-heeled consumers. Specifically, Gurman points to a recent comment made by Apple CEO Tim Cook. “The iPhone has become so integral [to] people’s lives,” Cook told analysts when he was asked if the increasing average price of the iPhone was sustainable. “I think people are willing to really stretch to get the best they can afford in that category.”
How Apple will differentiate the new model is harder to say. Gurman suggests the iPhone Ultra could feature a faster processor, better camera hardware than the Pro and Pro Max and an even larger display. “There also may be more future-forward features, such as finally dropping the charging port,” he adds.
It’s worth noting reports on the iPhone 15 line suggest Apple is already searching for more ways to differentiate the Pro models from their mainstream siblings. For example, one recent report said the upcoming Pro variants could feature WiFi 6E connectivity, while the iPhone 15 and iPhone 15 Plus ship with older WiFi 6 antennae. The Pro models could come with other differentiating features, including redesigned titanium frames with haptic volume and power buttons. Apple will also reportedly equip the Pro Max with a periscope camera lens.
For a product that its own creators, in a marketing pique, once declared “too dangerous” to release to the general public, OpenAI’s ChatGPT is seemingly everywhere these days. The versatile automated text generation (ATG) system, which is capable of outputting copy that is nearly indistinguishable from a human writer’s work, is officially still in beta but has already been utilized in dozens of novel applications, some of which extend far beyond the roles ChatGPT was originally intended for — like that time it simulated an operational Linux shell or that other time when it passed the entrance exam to Wharton Business School.
But these technical advancements bring with them a slew of opportunities for misuse and outright harm. And if our previous hamfisted attempts at handling the spread of deepfake video and audio technologies were any indication, we’re dangerously underprepared for the havoc that at-scale, automated disinformation production will wreak upon our society.
NurPhoto via Getty Images
OpenAI’s billion dollar origin story
OpenAI has been busy since its founding in 2015 as a non-profit by Sam Altman, Peter Thiel, Reid Hoffman, Elon Musk and a host of other VC luminaries, who all collectively chipped in a cool billion dollars to get the organization up and running. The “altruistic” venture argues that AI “should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”
The following year, the company released its first public beta of the OpenAI Gym reinforcement learning platform. Musk resigned from the board in 2018, citing a potential conflict of interest with his ownership of Tesla. 2019 was especially eventful for OpenAI. That year, the company established a “capped-profit” subsidiary (OpenAI LP) under the original non-profit (OpenAI Inc), received an additional billion-dollar funding infusion from Microsoft and announced plans to begin licensing its products commercially.
In 2020, OpenAI officially launched GPT-3, a text generator able to “summarize legal documents, suggest answers to customer-service enquiries, propose computer code [and] run text-based role-playing games.” The company released its commercial API that year as well.
“I have to say I’m blown away,” startup founder Arram Sabeti wrote at the time, after interacting with the system. “It’s far more coherent than any AI language system I’ve ever tried. All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future.”
2021 saw the release of DALL-E, a text-to-image generator; and the company made headlines again last year with the release of ChatGPT, a chat client based on GPT-3.5, the latest and current GPT iteration. In January 2023, Microsoft and OpenAI announced a deepening of their research cooperative with a multi-year, multi-billion-dollar ongoing investment.
“I think it does an excellent job at spitting out text that's plausible,” Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab and Associate Professor of Technology Policy Research at UC Berkeley, told Engadget. “It feels like somebody really wrote it. I've used it myself actually to kind of get over a writer's block, to just think through how I flow in the argument that I'm trying to make, so I found it helpful.”
That said, Nonnecke cannot look past the system’s stubborn habit of producing false claims. “It will cite articles that don't exist,” she added. “Right now, at this stage, it's realistic but there's still a long way to go.”
What is generative AI?
OpenAI is far from the only player in the ATG game. Generative AI (or, more succinctly, gen-AI) is the practice of using machine learning algorithms to produce novel content — whether that’s text, images, audio, or video — based on a training corpus of labeled examples. It belongs to the same broad family of machine learning techniques that trained Google’s AlphaGo, the song and video recommendation engines across the internet, and vehicle driver-assist systems. Of course while models like Stability AI’s Stable Diffusion or Google’s Imagen are trained to convert progressively higher resolution patterns of random dots into images, ATGs like ChatGPT remix text passages plucked from their training data to output suspiciously realistic, albeit frequently pedestrian, prose.
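To make that "remixing" intuition concrete, here's a drastically simplified sketch: a bigram Markov chain that can only emit words it has seen, in orders it has seen them. Real systems like GPT use learned neural representations rather than lookup tables, but the "predict a plausible next word from statistics of the training text" framing is the same.

```python
import random
from collections import defaultdict

# Tiny stand-in for a training corpus.
corpus = "the cat sat on the mat and the cat ran".split()

# Map each word to every word observed to follow it.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Emit words by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = successors.get(words[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Every output is, by construction, a remix of the input text, which is also why such models can only ever be about as good as the average of what they were fed.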
“They're trained on a very large amount of input,” Dr. Peter Krapp, Professor of Film & Media Studies at the University of California, Irvine, told Engadget. “What results is more or less… an average of that input. It's never going to impress us with being exceptional or particularly apt or beautiful or skilled. It's always going to be kind of competent — to the extent that we all collectively are somewhat competent in using language to express ourselves.”
Generative AI is already big business. While flashy events like Stable Diffusion’s maker getting sued for scraping training data from Meta or ChatGPT managing to schmooze its way into medical school (yes, in addition to Wharton) grab headlines, Fortune 500 companies like NVIDIA, Facebook, Amazon Web Services, IBM and Google are all quietly leveraging gen-AI for their own business benefit. They’re using it in a host of applications, from improving search engine results and proposing computer code to writing marketing and advertising content.
Wikipedia / Public Domain
The secret to ChatGPT’s success
Efforts to get machines to communicate with us as we do with other people, as Dr. Krapp notes, began in the 1960s and ‘70s with linguists being among the earliest adopters. “They realized that certain conversations can be modeled in such a way that they're more or less self-contained,” he explained. “If I can have a conversation with, you know, a stereotypical average therapist, that means I can also program the computer to serve as the therapist.” Which is how ELIZA became an NLP easter egg hidden in Emacs, the venerable text editor.
Today, we use the technological descendants of those early efforts to translate the menus at fancy restaurants for us, serve as digital assistants on our phones, and chat with us as customer service reps. The problem, however, is that to get an AI to perform any of these functions, it has to be specially trained to do that one specific thing. We’re still years away from functional general AIs but part of ChatGPT’s impressive capability stems from its ability to write middling poetry as easily as it can generate a fake set of Terms of Service for the Truth Social website in the voice of Donald Trump without the need for specialized training between the two.
This prosaic flexibility is possible because, at its core, ChatGPT is a chatbot. It’s designed first and foremost to accurately mimic a human conversationalist, which it actually did on Reddit for a week in 2020 before being outed. It was trained using supervised learning methods wherein the human trainers initially fed the model both sides of a given conversation — both what the human user and AI agent were supposed to say. With the basics in its robomind, ChatGPT was then allowed to converse with humans, with its responses being ranked after each session. Subjectively better responses scored higher in the model’s internal rewards system and were subsequently optimized for. This has resulted in an AI with a silver tongue but a “just sorta skimmed the Wiki before chiming in” approach to fact-checking.
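The ranking-and-reward loop described above can be sketched in miniature. This toy is an invented illustration, not OpenAI's pipeline (which trains a neural reward model and then optimizes the chatbot against it with reinforcement learning); it just shows how human rankings can push scores for response traits up or down, so that higher-scoring styles get favored over time.

```python
from collections import Counter

# Learned "reward" per response trait. Trait names are hypothetical.
reward = Counter()

def update(ranked_responses: list, step: float = 1.0) -> None:
    """ranked_responses is best-to-worst; each item is a set of traits.
    Traits of better-ranked responses gain reward, worse ones lose it."""
    n = len(ranked_responses)
    for rank, traits in enumerate(ranked_responses):
        for t in traits:
            reward[t] += step * (n - 1 - 2 * rank) / max(n - 1, 1)

def score(traits: set) -> float:
    """Score a candidate response by summing its traits' rewards."""
    return sum(reward[t] for t in traits)

# One ranking session: the polite, on-topic answer was preferred.
update([{"polite", "on_topic"}, {"on_topic", "rambling"}, {"rude"}])
print(score({"polite", "on_topic"}), score({"rude"}))  # → 2.0 -1.0
```

Note what this optimizes: sounding like a preferred response, not being factually correct, which is one way to think about the "silver tongue, shaky facts" result.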
Part of ChatGPT’s boisterous success — having garnered a record 100 million monthly active users just two months after its launch — can certainly be chalked up to solid marketing strategies such as the “too dangerous” neg of 2020, Natasha Allen, a partner at Foley & Lardner LLP, told Engadget. “I think the other part is just how easy it is to use it. You know, the average person can just plug in some words and there you go.”
“People who previously hadn’t been interested in AI, didn't really care what it was,” are now beginning to take notice. Its ease of use is an asset, Allen argues, making ChatGPT “something that's enticing and interesting to people who may not be into AI technologies.”
“It's a very powerful tool,” she conceded. “I don't think it's perfect. I think that obviously there are some errors but… it'll get you 70 to 80 percent of the way.”
Leon Neal via Getty Images
Will ChatGPT be Microsoft’s Tay for a new generation?
But a lot can go wrong in those last 20 to 30 percent, because ChatGPT doesn’t actually know what the words it’s remixing into new sentences mean, it just understands the statistical relationships between them. “The GPT-3 hype is way too much,” Sam Altman, OpenAI’s chief executive, warned in a July, 2020 tweet. “It’s impressive but it still has serious weaknesses and sometimes makes very silly mistakes.”
Those “silly” mistakes range from making nonsensical comparisons like “A pencil is heavier than a toaster” to the racist bigotry we’ve seen with past chatbots like Microsoft’s Tay — well, really, all of them to date if we’re being honest. Some of ChatGPT’s replies have even encouraged self-harm in its users, raising a host of ethical quandaries (among them: should an AI byline scientific research?) for both the company and the field as a whole.
“I'm worried because if we have deep fake video and voice, tying that with ChatGPT, where it can actually write something mimicking the style of how somebody speaks,” Nonnecke said. “Those two things combined together are just a powder keg for convincing disinformation.”
“I think it's gasoline on the fire, because people write and speak in particular styles,” she continued. “And that can sometimes be the tell — if you see a deepfake and it just doesn't sound right, the way that they're talking about something. Now, GPT very much sounds like the individual, both how they would write and speak. I think it's actually amplifying the harm.”
The current generation of celebrity impersonating chatbots aren’t what would be considered historically accurate (Henry Ford’s avatar isn’t antisemitic, for example) but future improvements could nearly erase the lines between reality and created content. “The first way it's going to be used is very likely to commit fraud,“ Nonnecke said, noting that scammers have already leveraged voice cloning software to pose as a mark’s relative and swindle money from them.
“The biggest challenge is going to be how do we appropriately address it, because those deep fakes are out. You already have the confusion,” Nonnecke said. “Sometimes it's referred to as the liar’s dividend: nobody knows if it's true, then sort of everything's a lie, and nothing can be trusted.”
Donato Fasano via Getty Images
ChatGPT goes to college
ChatGPT is raising hackles across academia as well. The text generator has notably passed the written portion of Wharton Business School’s entrance exam, along with all three parts of the US Medical Licensing exam. The response has been swift (as most panicked scramblings in response to new technologies tend to be) but widely varied. The New York City public school system took the traditional approach, ineffectually “banning” the app’s use by students, while educators like Dr. Ethan Mollick, associate professor at the University of Pennsylvania's prestigious Wharton School, have embraced it in their lesson plans.
"This was a sudden change, right? There is a lot of good stuff that we are going to have to do differently, but I think we could solve the problems of how we teach people to write in a world with ChatGPT," Mollick told NPR in January.
"The truth is, I probably couldn't have stopped them even if I didn't require it," he added. Instead, Mollick has his students use ChatGPT as a prompt and idea generator for their essay assignments.
UCI’s Dr. Krapp has taken a similar approach. “I'm currently teaching a couple of classes where it was easy for me to say, ‘okay, here's our writing assignment, let's see what ChatGPT comes up with,’” he explained. “I did the five different ways with different prompts or partial prompts, and then had the students work on, ‘how do we recognize that this is not written by a human, and what could we learn from this?’”
Is ChatGPT coming for your writing job?
At the start of the year, tech news site CNET was outed for having used an AI text generator of its own design to produce entire feature-length financial explainer articles — 75 in all since November 2022. The posts were supposedly “rigorously” fact checked by human editors to ensure their output was accurate, though cursory examinations uncovered rampant factual errors, requiring CNET and its parent company, Red Ventures, to issue corrections and updates for more than half of the articles.
BuzzFeed’s chief, Jonah Peretti, upon seeing the disastrous fallout CNET was experiencing from this computer generated dalliance, immediately decided to stick his tongue in the outlet too, announcing that his publication plans to employ gen-AI to create low-stakes content like personality quizzes.
This news came mere weeks after BuzzFeed laid off a sizable portion of its editorial staff on account of “challenging market conditions.” The coincidence is hard to ignore, especially given the waves of layoffs currently rocking the tech and media sectors for that specific reason, even as the conglomerates themselves bathe in record revenue and earnings.
This is not the first time that new technology has displaced existing workers. NYT columnist Paul Krugman points to coal mining as an example. The industry saw massive workforce reductions throughout the 20th century, not because our use of coal decreased, but because mining technologies advanced enough that fewer humans were needed to do the same amount of work. The same effect is seen in the automotive industry with robots replacing people on assembly lines.
“It is difficult to predict exactly how AI will impact the demand for knowledge workers, as it will likely vary, depending on the industry and specific job tasks,” Krugman opined. “However, it is possible that in some cases, AI and automation may be able to perform certain knowledge-based tasks more efficiently than humans, potentially reducing the need for some knowledge workers.”
However, Dr. Krapp is not worried. “I see that some journalists have said, ‘I'm worried. My job has already been impacted by digital media and digital distribution. Now the type of writing that I do well could be done by computer for cheap much more quickly,’” he said. “I don't see that happening. I don't think that's the case. I think we still, as humans, have a need — a desire — for recognizing in others what's human about them.”
“[ChatGPT is] impressive. It's fun to play with, [but] we're still here,” he added, “We're still reading, it's still meant to be a human size interface for human consumption, for human enjoyment.”
Fear not, for someone is sure to save us, probably
ChatGPT’s shared-reality-shredding fangs will eventually be capped, Nonnecke is confident, whether by Congress or by the industry itself in response to public pressure. “I actually think that there's bipartisan support for this, which is interesting in the AI space,” she told Engadget. “And in data privacy, data protection, we tend to have bipartisan support.”
As one example of the industry’s attempts to self-regulate, she points to efforts in 2022 spearheaded by OpenAI safety and alignment researcher Scott Aaronson to develop a cryptographic watermark that would let end users easily spot computer-generated material.
“Basically, whenever GPT generates some long text, we want there to be an otherwise unnoticeable secret signal in its choices of words, which you can use to prove later that, yes, this came from GPT,” Aaronson wrote on his blog. “We want it to be much harder to take a GPT output and pass it off as if it came from a human. This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda.”
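Aaronson hasn’t published the details of the scheme, but the general idea he describes can be sketched: a secret key and a pseudorandom function quietly nudge the model toward candidate tokens that score high under the key, and anyone holding the key can later test whether a text’s tokens score suspiciously well. The toy sketch below is purely illustrative (the “model,” vocabulary and scoring function are all stand-ins, not OpenAI’s implementation):

```python
import hashlib
import random

def prf_score(key: str, context: tuple, token: str) -> float:
    """Keyed pseudorandom score in [0, 1) for a token given its context."""
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def generate(key: str, steps: int = 80, seed: int = 0) -> list:
    """Stand-in 'language model': among equally plausible candidate tokens,
    quietly prefer the one that scores highest under the secret key."""
    rng = random.Random(seed)
    vocab = [f"word{i}" for i in range(50)]
    out = []
    for _ in range(steps):
        candidates = rng.sample(vocab, 5)   # plausible next tokens
        context = tuple(out[-3:])           # short rolling context
        out.append(max(candidates, key=lambda t: prf_score(key, context, t)))
    return out

def detect(tokens: list, key: str) -> float:
    """Mean keyed score: about 0.5 for ordinary text, noticeably higher
    if the text was generated with this key."""
    scores = [prf_score(key, tuple(tokens[max(0, i - 3):i]), tokens[i])
              for i in range(len(tokens))]
    return sum(scores) / len(scores)
```

A holder of the key can flag text whose mean score sits well above 0.5, while readers without the key see nothing unusual in the word choices. In practice, paraphrasing or lightly editing the output can weaken such a signal.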
The efficacy of such a safeguard remains to be seen. “It's very much whack-a-mole, right now,” Nonnecke exclaimed. “It's the companies themselves making that [moderation] decision. There's no transparency in how they're deciding what types of prompts to block or not block, which is very concerning to me.”
“Somebody's going to use this to do terrible things,” she said.
Amazon's drone delivery program doesn't seem to be off to a great start. The Prime Air division was said to be hit hard by recent, widespread layoffs. Now, a new report indicates that Amazon's drones have made just a handful of deliveries in their first few weeks of operation.
After nearly a decade of working on the program, Amazon said in December that it would start making deliveries by drone in Lockeford, California, and College Station, Texas. However, by the middle of January, as few as seven houses had received Amazon packages by drone, according to The Information: two in California and five in Texas.
The report suggests that Amazon has been hamstrung by the Federal Aviation Administration, which is said to be blocking drones from flying over roads or people unless the company gets permission on a case-by-case basis. Although Amazon had touted its FAA certification, the agency imposed a string of restrictions, which hadn't been revealed until now. It has largely rejected Amazon's requests to loosen the limitations.
One of the plans the FAA agreed to, according to the report, was for Amazon employees to check no cars were passing on surrounding roads before drones left its Lockeford delivery facility. That depot is on an industrial block, and the drones need to fly over at least one road before getting to any homes.
Amazon's drones are far heavier than ones operated by Wing, as well as Walmart’s partners Flytrex and Zipline. Those weigh between 10 and 40 pounds. Amazon's drone, on the other hand, weighs around 80 pounds and can only carry a five-pound payload. The report suggests the drone's mass could be causing concern among FAA officials. The agency has given Wing, Flytrex and Zipline permission to fly over roadways — to date, Wing has carried out more than 300,000 deliveries.
One other aspect that doesn't help Amazon's prospects is that folks who want to receive deliveries by drone need a backyard where packages can be dropped off — so apartment dwellers need not apply. The drone can only carry a certain size of box and it dumps packages from 12 feet in the air, further limiting the types of products it can transport.
“We meet or exceed all safety standards and have obtained regulatory authorization to conduct commercial drone delivery operations," Amazon spokesperson Maria Boschetti told The Information. "We welcome the FAA’s rigorous evaluations of our operation, and we’ll continue to champion the significant role that regulators play to ensure all drone companies are achieving the right design, build and operating standards." Boschetti added that the Prime Air layoffs, which have reportedly slashed the size of the delivery teams at both locations by more than half, have not affected Amazon's plans for the test sites.
After a few years of staying mostly under the radar, Instagram co-founders Kevin Systrom and Mike Krieger are back with a new project. It’s an app called Artifact, a name Systrom told Platformer’s Casey Newton is designed to evoke the project’s three tenets: “articles, facts and artificial intelligence.” In short, it’s a news aggregation app driven by a TikTok-like recommendation algorithm.
When you first launch Artifact, you’ll see a central feed populated by stories from publications like The New York Times. As you read more articles, the app will begin personalizing your feed. According to Systrom, the recommendation system Artifact’s team of seven built prioritizes how long you spend reading about certain subjects over clicks and comments. He added Artifact will feature news stories from both left and right-leaning outlets, though the company won’t allow posts that “promote falsehoods."
In the future, the app will also feature a social component. Systrom and Krieger plan to roll out a feed that will highlight articles from users you follow, alongside their commentary on that content. Additionally, you’ll be able to privately discuss posts through a direct-message inbox. At the moment, Systrom and Krieger are funding the project with their own money. They say Artifact represents a first attempt to imagine what the next generation of social apps could look like. If you want to give what they created a try, you can join a waiting list for the app’s iOS and Android beta. Systrom said the team plans to invite new users quickly.
The United States government has reportedly stopped issuing licenses that allow companies in the country to export to Huawei, according to The Financial Times. If you'll recall, the Trump administration added the company to the "entity list," making it ineligible to receive exports from the US without a license. Since then, the US Commerce Department has issued licenses to some companies, like Qualcomm, to provide Huawei with American tech unrelated to 5G networks — Qualcomm, for instance, supplies Huawei with 4G chips for smartphones. But the government is reportedly looking to impose a total ban on the sale of American tech to the Chinese firm, and this expanded restriction is a step toward making that happen.
The US government adds companies to the entity list if it believes they are involved in or "pose a significant risk of being or becoming involved in, activities contrary to the national security or foreign policy interests of the United States." It has previously accused Huawei of having deep ties with the Chinese government and warned allies that the 5G equipment it makes could be used to spy on other countries and companies. Huawei has repeatedly denied the accusation.
It's not entirely clear why the US government is moving toward a total ban, if this report is indeed true, but the Biden administration seems to be taking a tougher stance on China compared to its predecessor. Last year, it introduced new rules that prohibit the export to China and Russia of powerful semiconductors that could be repurposed for military use, as well as chipmaking equipment. One possible reason, The Times says, is that Huawei is backing projects that aim to build a domestic Chinese semiconductor supply chain that doesn't rely on imports. A former CIA official also told the publication that the government is probably looking to expand the existing export ban because Huawei is a totally different company from when it was added to the entity list.
Huawei's focus back then was on 5G technology, but it has since changed gears to prioritize its enterprise and government businesses, including a cloud service, to survive the trade ban. Being added to the blacklist had a huge impact on Huawei's revenues in 2021, but company executive Eric Xu said the manufacturer was able to pull itself "out of crisis mode" in 2022 and expects to go back to "business as usual" this year. A total ban could very well put Huawei back into crisis mode, and it would likely affect the revenues of its US suppliers, as well. That said, the Chinese company might have some time to prepare, depending on when the export licenses that had already been issued will expire.
A commerce department spokesperson didn't confirm whether it has truly stopped issuing licenses to American firms, telling The Times that it "continually assess[es] its policies and regulations." A source told Reuters, however, that US officials are in the midst of crafting new policies that would prohibit shipments to Huawei below the 5G level. The new restrictions would reportedly cover products and components related to 4G, WiFi 6 and 7, AI, as well as cloud and high-performance computing.
Shou Zi Chew, the CEO of TikTok, will testify before the House Energy and Commerce Committee on March 23rd. Chew will discuss the app's privacy and data security measures, its impact on kids and its ties to China (parent company ByteDance is headquartered in the country). This will be Chew's first appearance in front of a congressional panel, the committee said. TikTok COO Vanessa Pappas faced similar questions from lawmakers in September.
"ByteDance-owned TikTok has knowingly allowed the ability for the Chinese Communist Party to access American user data," committee chair Cathy McMorris Rodgers said in a statement. "Americans deserve to know how these actions impact their privacy and data security, as well as what actions TikTok is taking to keep our kids safe from online and offline harms. We’ve made our concerns clear with TikTok. It is now time to continue the committee’s efforts to hold Big Tech accountable by bringing TikTok before the committee to provide complete and honest answers for people.”
Engadget has contacted TikTok for comment.
TikTok's security and relationship with Chinese authorities have drawn the attention of US officials over the last few years. However, as CNBC notes, discussions between the US and TikTok appear to have stalled, as officials remain concerned about the possibility of China forcing it to hand over user data.
The company has tried to placate concerns from regulators and elected officials by storing US user data on domestic Oracle servers and deleting such data from its own servers in the US and Singapore. Oracle has been reviewing TikTok's algorithms and content moderation models for signs of Chinese interference.
Last month, TikTok said it fired four employees (two each in China and the US) who accessed the data of several journalists. They were said to be looking for the sources of leaks to reporters.
News of Chew's appearance before the panel comes on Data Privacy Day. In a blog post, TikTok laid out some of its efforts to bolster user privacy, including a plan to set up a data center in Dublin this year to store UK and European Economic Area data.
The Biden administration has reportedly reached an agreement with the Netherlands and Japan to restrict China’s access to advanced chipmaking machinery. According to Bloomberg, officials from the two countries agreed on Friday to adopt some of the same export controls the US has used over the last year to prevent companies like NVIDIA from selling their latest technologies in China. The agreement would reportedly see export controls imposed on companies that produce lithography systems, including ASML and Nikon.
Bloomberg reports the US, Netherlands and Japan don’t plan to announce the agreement publicly. Moreover, implementation could take “months” while the countries work to hammer out the legal details. “Talks are ongoing, for a long time already, but we don’t communicate about this. And if something would come out of this, it is questionable if this will be made very visible,” said Dutch Prime Minister Mark Rutte on Friday, responding to a question about the negotiations.
According to Bloomberg, the agreement will cover “at least” some of ASML’s immersion lithography machines. As of last year, ASML was the only company in the world producing the extreme ultraviolet lithography (EUV) machines chipmakers need to make the 5nm and 3nm semiconductors that power the latest smartphones and computers. Cutting off China from ASML’s products is an effort by the Biden administration to freeze the country’s domestic chip industry. Last summer, Chinese state media reported that SMIC, China’s leading semiconductor manufacturer, had begun volume production of 14nm chips and had successfully started making 7nm silicon without access to foreign chip-making equipment. China has said SMIC is working on making 5nm semiconductors, but it’s unclear how the company will do that without access to EUV machines.