Posts by Pranav Dixit

Apple may integrate Google's Gemini AI into iOS in the future

Apple is integrating GPT-4o, the large language model that powers ChatGPT, into iOS 18, iPadOS 18 and macOS Sequoia thanks to a partnership with OpenAI announced at WWDC, the company’s annual developer conference, on Monday. But shortly after the keynote ended, Craig Federighi, Apple’s senior vice president of software engineering, said that the company might also bake Gemini, Google’s family of large language models, into its operating systems.

“We want to enable users ultimately to choose the models they want, maybe Google Gemini in the future,” Federighi said in a conversation with reporters after the keynote. “Nothing to announce right now.”

The news is notable because even though Apple mentioned plans to add more AI models to its operating systems during the keynote, it didn’t name Gemini specifically. Letting people choose the AI model they want on their devices, instead of simply foisting one on them, would give Apple devices a level of customization that competitors like Google and Samsung don’t offer.

Catch up here for all the news out of Apple's WWDC 2024.


Apple's first attempt at AI is Apple Intelligence

Apple is going all in on AI in the most Apple way possible. At WWDC, its annual developer conference, the company revealed Apple Intelligence, an Apple-branded take on AI that focuses on infusing its software with the technology and upgrading existing apps to make them more useful.

On supported devices, Apple Intelligence will be able to quickly summarize web pages in Safari, a feature that already exists in rival web browsers like Arc. You’ll also be able to use Apple Intelligence to quickly catch up on priority notifications. And just like Gmail and Outlook, your devices will be able to create fleshed-out responses to emails and text messages on your behalf.

Apple’s AI updates are a long time coming. The technology has shaken up Silicon Valley ever since OpenAI launched ChatGPT at the end of 2022. Since then, Apple’s rivals like Google, Samsung and Microsoft, as well as companies like Meta, have raced to integrate AI features into their primary products. Last month, Google announced that AI would be a cornerstone of the next version of Android and made major AI-powered changes to its search engine. Samsung, Apple’s primary smartphone competitor, added AI features to its phones earlier this year that can translate calls in real time and edit photos. Microsoft, too, unveiled AI-powered Copilot+ PCs, aimed at infusing Windows with AI features that include live captions, image editing and beefed-up systemwide search.

This is a developing story...

Catch up here for all the news out of Apple's WWDC 2024.


Apple will reportedly build a dedicated Passwords app for the iPhone and Mac

Apple plans to build a password management app right into the next versions of iPhone and Mac operating systems, reported Bloomberg’s Mark Gurman on Thursday. The new app, simply called Passwords, will compete against existing password managers like 1Password and LastPass, which typically charge people a monthly fee for generating and storing unique passwords. Apple plans to reveal the app at the company’s annual Worldwide Developers Conference on June 10. 

Apple already generates and stores unique passwords through iCloud Keychain, a feature that syncs passwords across all Apple devices you own as well as Windows PCs through a browser extension. But passwords stored in iCloud Keychain live — weirdly — in the Settings app, often making them cumbersome to find or change. Having a dedicated app for passwords built into Apple devices would not only make this easier but also give people one more reason to stay in the Apple ecosystem.

Just like its rivals, Apple’s Passwords app will reportedly split passwords into different categories like accounts, WiFi networks, and Passkeys (here’s our deep dive explaining how they work). It will also allow you to import passwords from rival apps and will fill them in automatically when your device detects you’re logging into a website or an app. Passwords will also work on Apple’s $4,000 Vision Pro headset, and, just like Google Authenticator and Authy, will support two-factor verification codes. What is still unclear is whether the Passwords app will let you securely store files and images in addition to passwords, something that both 1Password and LastPass offer.

In addition to Passwords, Apple is expected to reveal the next versions of iOS, iPadOS, macOS, watchOS and visionOS on Monday. The new versions of the software will reportedly be infused with brand new AI features.


DuckDuckGo dips into the AI chatbot pond

Because there simply aren’t enough AI-powered chatbots out there, we’re getting one more. This one, called AI Chat, comes courtesy of DuckDuckGo, the privacy-focused search engine that obviously doesn’t want to feel left behind in the AI arms race. The company has been testing AI Chat over the last few months, but as of today, it’s available to everyone.

Unlike standalone bots like Google’s Gemini and OpenAI’s ChatGPT, DuckDuckGo’s AI Chat isn’t powered by its own large language model. Instead, think of it as a way to access multiple chatbots in a single place. Right now, AI Chat lets you choose between OpenAI’s GPT-3.5, Anthropic’s Claude 3 Haiku, Meta’s Llama 3 and Mistral’s Mixtral 8x7B, and the company says that more models are coming soon. The main differences between them largely boil down to how many parameters — technical speak for the settings that a large language model can tweak to give you an answer — each one has. If you don’t like a model’s answer, you can try another one.

AI Chat is free to use but comes with a daily usage limit. The company said that it’s exploring a paid plan that would offer higher limits as well as access to more advanced AI models. DuckDuckGo said that you can use AI Chat to ask questions, draft emails, write code and create travel itineraries, among other things, but it doesn’t generate images yet.

Not surprisingly, DuckDuckGo is stressing how private using AI Chat is compared to using ChatGPT or Claude on their own. The company claims that your questions and the generated answers won’t be used to train AI models. “[We] call the underlying chat models on your behalf, removing your IP address completely and using our IP address instead,” wrote Nirzar Pangarkar, DuckDuckGo’s lead designer, in a blog post. “This way it looks like the requests are coming from us and not you.” And if you’d rather not deal with a chatbot when you’re just trying to search anonymously, DuckDuckGo lets you easily turn the feature off.

AI Chat is separate from DuckAssist, another AI-powered feature that DuckDuckGo added last year that provides AI-generated summaries at the top of search results. It’s similar to Google’s AI Overviews, the controversial new feature that recently told people to eat rocks and put glue on their pizza, except that DuckAssist sticks to reliable sources like Wikipedia to generate its responses. DuckDuckGo thinks that AI Chat and DuckAssist are complementary. “If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw,” wrote Pangarkar. “It’s all down to your personal preference.”


Former OpenAI, Google and Anthropic workers are asking AI companies for more whistleblower protections

A group of current and former employees at leading AI companies like OpenAI, Google DeepMind and Anthropic has signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential risks of AI. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter, which was published on Tuesday, says. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risking the loss of their vested equity in the company. After the report, OpenAI CEO Sam Altman called the provision "genuinely embarrassing" and said it had been removed from recent exit documentation, though it's unclear if it remains in force for some employees.

The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo. Kokotajlo said that he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. The letter — which was endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio and Stuart Russell — expresses grave concerns over the lack of effective government oversight of AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, the exacerbation of inequality and even the loss of human control over autonomous systems, potentially resulting in human extinction.

“There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all areas,” wrote Kokotajlo on X. “Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”

OpenAI, Google and Anthropic did not immediately respond to requests for comment from Engadget. In a statement sent to Bloomberg, an OpenAI spokesperson said the company is proud of its “track record providing the most capable and safest AI systems” and that it believes in its “scientific approach to addressing risk.” It added: “We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world.”

The signatories are calling on AI companies to commit to four key principles:

  • Refraining from retaliating against employees who voice safety concerns

  • Supporting an anonymous system for whistleblowers to alert the public and regulators about risks

  • Allowing a culture of open criticism

  • And avoiding non-disparagement or non-disclosure agreements that restrict employees from speaking out

The letter comes amid growing scrutiny of OpenAI's practices, including the disbandment of its "superalignment" safety team and the departure of key figures like co-founder Ilya Sutskever and Jan Leike, who criticized the company's prioritization of "shiny products" over safety.


OpenAI says it stopped multiple covert influence operations that abused its AI models

OpenAI said that it stopped five covert influence operations over the last three months that used its AI models for deceptive activities across the internet. These operations, which originated in Russia, China, Iran and Israel, attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said on Thursday. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” OpenAI said in a report about the operations, adding that it worked with people across the tech industry, civil society and governments to cut off these bad actors.

OpenAI’s report comes amid concerns about the impact of generative AI on the many elections slated around the world this year, including in the US. In its findings, OpenAI revealed how networks of people engaged in influence operations have used generative AI to produce text and images at much higher volumes than before, and to fake engagement by generating comments on social media posts.

“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, told members of the media in a press briefing, according to Bloomberg. “With this report, we really want to start filling in some of the blanks.”

OpenAI said that the Russian operation, which it dubbed “Doppelganger,” used the company’s models to generate headlines, convert news articles into Facebook posts and create comments in multiple languages to undermine support for Ukraine. Another Russian group used OpenAI's models to debug code for a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the US and the Baltic states. The Chinese network "Spamouflage," known for its influence efforts across Facebook and Instagram, used OpenAI's models to research social media activity and generate text-based content in multiple languages across various platforms. The Iranian "International Union of Virtual Media" also used AI to generate content in multiple languages.

OpenAI’s disclosure is similar to the ones that other tech companies make from time to time. On Wednesday, for instance, Meta released its latest report on coordinated inauthentic behavior detailing how an Israeli marketing firm had used fake Facebook accounts to run an influence campaign on its platform that targeted people in the US and Canada.


OpenAI has a new version of ChatGPT just for universities

OpenAI is bringing ChatGPT to college campuses across the country. On Thursday, the company announced ChatGPT Edu, a version of ChatGPT built specifically for students, academics and faculty. “ChatGPT Edu is designed for schools that want to deploy AI more broadly to students and their campus communities,” the company said in a blog post.

ChatGPT Edu includes access to GPT-4o, OpenAI’s latest large language model, which the company revealed earlier this month. OpenAI claims that the model is much better than its previous versions at interpreting text, coding, doing math, analyzing data sets and accessing the web. ChatGPT Edu will also have significantly higher message limits than the free version of ChatGPT and will allow universities to build custom versions of ChatGPT trained on their own data — confusingly called GPTs — and share them within university workspaces. OpenAI says that conversations and data from ChatGPT Edu won’t be used to train its models.

Although the introduction of ChatGPT in late 2022 initially raised concerns about academic integrity and potential misuse in educational environments, universities have increasingly been experimenting with generative AI for both teaching and research. OpenAI said that it built ChatGPT Edu after seeing Wharton, Arizona State University and Columbia, among others, use ChatGPT Enterprise.

MBA students at Wharton, for instance, completed their final reflection assignments by having discussions with a GPT trained on course materials, while Arizona State University is experimenting with its own GPTs that hold German-language conversations with students learning the language.


OpenAI's board allegedly learned about ChatGPT launch on Twitter

Helen Toner, one of OpenAI’s former board members who was responsible for firing CEO Sam Altman last year, revealed that the company’s board didn’t know about the launch of ChatGPT until it was released in November 2022. “[The] board was not informed in advance of that,” Toner said on Tuesday on a podcast called The Ted AI Show. “We learned about ChatGPT on Twitter.”

Toner’s comments came just two days after she criticized the way OpenAI is governed in an Economist piece published on Sunday, which she co-wrote with Tasha McCauley, another former OpenAI board member. This is the first time that Toner has spoken openly about the circumstances that led to Altman’s dramatic ouster from the company he co-founded in 2015, and his quick reinstatement following protests from employees.

In the podcast, Toner, who is currently a director of strategy at the Center for Security and Emerging Technology at Georgetown, said that Altman had made it hard for OpenAI’s board to do its job by withholding information, misrepresenting things and, “in some cases outright lying to the board.” She added that Altman also hid the company’s ownership structure from the board. “Sam didn’t inform the board that he owned the OpenAI startup fund, even though he was constantly claiming to be an independent board member with no financial interest in the company,” Toner said. Altman’s actions “really damaged our ability to trust him,” she said, and by October 2023, the board was “already talking pretty seriously about whether we needed to fire him.”

She criticized Altman’s leadership on safety concerns around AI, saying that he often gave the board inaccurate information on the company’s safety processes, “meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.”

When asked for comment, an OpenAI spokesperson referred Engadget to the statement the company provided to The TED AI Show. “We are disappointed that Ms. Toner continues to revisit these issues,” Bret Taylor, OpenAI’s current board chair and former co-CEO of Salesforce, told the podcast. An independent review of Altman’s firing, he added, “concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

The exact reasons for Altman’s abrupt ouster last year remain unclear and have been a source of intense speculation in Silicon Valley. In March, Altman was reinstated to the board by a group of temporary board members that included Taylor, economist Larry Summers, OpenAI co-founder Greg Brockman, Instacart CEO and former Meta executive Fidji Simo, former Sony executive Nicole Seligman, and Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation. In an independent investigation, law firm WilmerHale found that Toner’s decision to fire Altman, along with the rest of OpenAI’s previous board, “was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman.” WilmerHale also found that OpenAI’s previous board had fired Altman “abruptly” and without giving him a chance to respond to its concerns.

Toner’s revelations are the latest controversy for OpenAI, the company that sparked the modern AI revolution. Over the last few days, multiple safety researchers have left the company, publicly criticizing its leadership on their way out. OpenAI also backtracked on non-disparagement agreements it had required departing employees to sign after a Vox investigation, and it was forced to explain itself after actor Scarlett Johansson accused the company of copying her voice for ChatGPT despite her denying it permission.


Samsung reportedly requires independent repair stores to rat on customers using aftermarket parts

If you take your Samsung device to an independent shop for repair, Samsung requires the store to send your name, contact information, device identifier and the nature of your complaint to the mothership. Worse, if the repair store detects that your device has previously been repaired with an aftermarket or non-Samsung part, Samsung requires the establishment to “immediately disassemble” your device and “immediately notify” the company.

These details were revealed thanks to 404 Media, which obtained a contract that Samsung requires all independent repair stores to sign in exchange for selling them genuine repair parts. Here’s the relevant section from the contract: “Company shall immediately disassemble all products that are created or assembled out of, comprised of, or that contain any Service Parts not purchased from Samsung.” It adds that the store “shall immediately notify Samsung in writing of the details and circumstances of any unauthorized use or misappropriation of any Service Part for any purpose other than pursuant to this Agreement. Samsung may terminate this Agreement if these terms are violated.” Samsung did not respond to a request for comment from Engadget.

Samsung’s contract is troubling — customers who take their devices to independent repair stores do not necessarily expect their personal information to be sent to the device manufacturer. And if they’ve previously repaired their devices using third-party parts that are often vastly cheaper than official ones (and just as good in many cases), they certainly do not expect a repair store to snitch on them to the manufacturer and have their device rendered unusable.

Experts who spoke to 404 Media said that consumers are within their rights to use third-party parts to repair devices they own under the Magnuson-Moss Warranty Act, a federal law that governs consumer product warranties in the US. So far, right-to-repair legislation exists in 30 states, according to the Public Interest Research Group (PIRG), a consumer advocacy organization. But in states like New York, Minnesota and California, where this legislation goes into effect this year, contracts like the one Samsung makes repair stores sign would be illegal, 404 Media pointed out.

“This is exactly the kind of onerous, one-sided ‘agreement’ that necessitates the right-to-repair,” Kit Walsh, a staff attorney at the Electronic Frontier Foundation, told the publication. “In addition to the provision you mentioned about dismantling devices with third-party components, these create additional disincentives to getting devices repaired, which can harm both device security and the environment as repairable devices wind up in landfills.”

This isn’t the only device repair controversy that has landed Samsung in hot water. Hours before the 404 Media report, repair blog and parts retailer iFixit announced that it was ending its collaboration with Samsung on a “Repair Hub” less than two years into the partnership. “Samsung’s approach to repairability does not align with our mission,” iFixit said in a blog post, citing the high prices of Samsung’s parts and the unrepairable nature of Samsung’s devices, which “remained frustratingly glued together,” as reasons for pulling the plug.


OpenAI will reportedly pay $250 million to put News Corp's journalism in ChatGPT

OpenAI and News Corp, the owner of The Wall Street Journal, MarketWatch, The Sun, and more than a dozen other publishing brands, have struck a multi-year deal to display news from these publications in ChatGPT, News Corp announced on Wednesday. OpenAI will be able to access both current and archived content from News Corp’s publications and use the data to further train its AI models. Neither company disclosed the terms of the deal, but a report in The Wall Street Journal estimated that News Corp would get $250 million over five years in cash and credits.

“The pact acknowledges that there is a premium for premium journalism,” News Corp Chief Executive Robert Thomson reportedly said in a memo to employees on Wednesday. “The digital age has been characterized by the dominance of distributors, often at the expense of creators, and many media companies have been swept away by a remorseless technological tide. The onus is now on us to make the most of this providential opportunity.”

Generative AI has exploded in popularity ever since OpenAI released ChatGPT at the end of 2022. But the quality of the responses provided by AI-powered chatbots is only as good as the data used to train the models that power them. So far, AI companies have trained their models by scraping publicly available data from the internet, often without the consent of creators. More recently, though, they have been striking financial deals with the news industry to make sure their models can be trained on information that is current and authoritative. Over the last few months alone, OpenAI has announced partnerships with Reddit, the Financial Times, Dotdash Meredith, the Associated Press, German publisher Axel Springer, which owns Politico and Business Insider in the US and Bild and Die Welt in Germany, and Spain’s Prisa Media. Last month, News Corp also struck a deal with Google, reportedly worth between $5 million and $6 million, to train its AI models, according to a report in The Information.

Google and OpenAI aren’t the only companies striking these deals to train their AI models. Hours before the News Corp announcement, Business Insider reported that Meta, which recently stuffed its own AI chatbot into Facebook, Messenger, WhatsApp, and Instagram, and also sells AI-powered sunglasses, was thinking about striking its own deals with news publishers to get access to training data.

Money from AI companies is a growing revenue source for a struggling news industry. But some publishers are still wary of striking these deals. The New York Times has sued OpenAI and Microsoft over the use of its content to train AI systems. And the NYT, the BBC and The Verge have blocked OpenAI from scraping their websites.
