Valve is giving Steam Deck users with slow internet connections or bandwidth caps a new way to install games on their devices. The latest Steam and Steam Deck betas add local network game transfers, a feature that allows you to copy existing game files from one PC to another over a local area network. Valve says the tool can reduce internet traffic and cut the time it takes to install games and updates, since it bypasses the need to connect to a Steam content server over the internet.
“Local Network Game Transfers are great for Steam Deck owners, multi-user Steam households, dorms, LAN parties, etc,” the company points out. “No more worries about bandwidth or data caps when all the files you need are already nearby.” Once you’ve installed the new software on your devices, Steam will first check if it can transfer a game installation or set of update files over your local network before contacting a public Steam content server. If at any point one of the devices involved in the transfer is disconnected from your local network, Steam will fall back to downloading any necessary files from the internet.
By default, the feature is set to only work between devices logged into the same Steam account, but you can also transfer files between friends on the same local area network. It’s also possible to transfer to any user on the same network, which could be handy during a LAN tournament, for example. Valve has published a FAQ with more information about local network game transfers, including details on some of the limitations of the feature, over on the Steam website.
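Functionally, what Valve describes is a simple ordering: look for the files on a local peer first, and only hit a Steam content server if that fails or the peer disappears mid-transfer. Here is a minimal sketch of that logic in Python; every function in it is a hypothetical placeholder, not part of Steam's actual client or any real API.

```python
# Hypothetical sketch of the local-first ordering Valve describes.
# The helpers are stand-ins, not Steam's actual client code or APIs.

def find_local_peer(app_id: str):
    """Return a LAN device that already has the files, or None."""
    return None  # stub: the real client would discover peers on the local network

def copy_from_peer(peer, app_id: str) -> None:
    """Copy installed game or update files from the peer over the LAN."""
    ...

def download_from_cdn(app_id: str) -> None:
    """Fall back to a public Steam content server over the internet."""
    print(f"downloading {app_id} from a Steam content server")

def install_or_update(app_id: str) -> None:
    peer = find_local_peer(app_id)
    if peer is not None:
        try:
            copy_from_peer(peer, app_id)   # LAN transfer: no internet bandwidth used
            return
        except ConnectionError:
            pass                           # peer left the local network mid-transfer
    download_from_cdn(app_id)              # fetch any remaining files from the internet

install_or_update("example_app")
```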
Great news everyone, we’re pivoting to chatbots! Little did OpenAI realize when it released ChatGPT last November that the advanced LLM (large language model) designed to uncannily mimic human writing would become the fastest-growing app to date, with more than 100 million users signing up over the past three months. Its success — helped along by a $10 billion, multi-year investment from Microsoft — largely caught the company’s competition flat-footed, spurring a frantic response from Google, Baidu and Alibaba. But as these enhanced search engines come online in the coming days, the ways and whys of how we search are sure to evolve alongside them.
“I'm pretty excited about the technology. You know, we've been building NLP systems for a while and we've been looking every year at incremental growth,” Dr. Sameer Singh, Associate Professor of Computer Science at the University of California, Irvine (UCI), told Engadget. “For the public, it seems like suddenly out of the blue, that's where we are. I've seen things getting better over the years and it's good for all of this stuff to be available everywhere and for people to be using it.”
As to the recent public success of large language models, “I think it's partly that technology has gotten to a place where it's not completely embarrassing to put the output of these models in front of people — and it does look really good most of the time,” Singh continued. “I think that that’s good enough.”
“I think it has less to do with technology but more to do with the public perception,” he continued. “If GPT hadn't been released publicly… Once something like that is out there and it's really resonating with so many people, the usage is off the charts.”
Search providers have big, big ideas for how artificial intelligence-enhanced web crawlers and search engines might work, and damned if they aren’t going to break stuff and move fast to get there. Microsoft envisions its Bing AI serving as the user’s “copilot” for web browsing, following them from page to page, answering questions and even writing social media posts on their behalf.
This is a fundamental change from the process we use today. Depending on the complexity of the question, users may have to visit multiple websites, then sift through the collected information and stitch it together into a cohesive idea before evaluating it.
“That's more work than having a model that hopefully has read these pages already and can synthesize this into something that doesn't currently exist on the web,” Brendan Dolan-Gavitt, Assistant Professor in the Computer Science and Engineering Department at NYU Tandon, told Engadget. “The information is still out there. It's still verifiable, and hopefully correct. But it's not all in place.”
For its part, Google’s vision of the AI-powered future has users hanging around its search page rather than clicking through to destination sites. Information relevant to the user’s query would be collected from the web, stitched together by the language model, then regurgitated as an answer, with references to the originating websites displayed as footnotes.
This all sounds great, and it was all going great, right up until the very first opportunity for something to go wrong, which it promptly did. In its inaugural Twitter ad — less than 24 hours after debuting — Bard, Google’s answer to ChatGPT, confidently declared, “JWST took the very first pictures of a planet outside of our own solar system.” You will be shocked to learn that the James Webb Space Telescope did not, in fact, take the first picture of an exoplanet. The ESO’s Very Large Telescope holds that honor, dating back to 2004. Bard just sorta made it up. Hallucinated it out of the digital ether.
Of course this isn’t the first time that we’ve been lied to by machines. Search has always been a bit of a crapshoot, ever since the early days of Lycos and Altavista. “When search was released, we thought it was ‘good enough’ though it wasn't perfect,” Singh recalled. “It would give all kinds of results. Over time, those have improved a lot. We played with it, and we realized when we should trust it and when we shouldn’t — when we should go to the second page of results, and when we shouldn't.”
The subsequent generation of voice AI assistants worked through the same basic issues as their text-based predecessors. “When Siri and Google Assistant and all of these came out and Alexa,” Singh said, “they were not the assistants that they were being sold to us as.”
The performance of today’s LLMs, like Bard and ChatGPT, is likely to improve along similar paths through public use, as well as through further specialization into specific technical and knowledge-based roles such as medicine, business analysis and law. “I think there are definitely reasons it becomes much better once you start specializing it. I don't think Google and Microsoft specifically are going to be specializing it too much — their market is as general as possible,” Singh noted.
In many ways, what Google and Bing are offering by interposing their services in front of the wider internet — much as AOL did with the America Online service in the ‘90s — is a logical conclusion to the challenges facing today’s internet users.
“Nobody's doing the search as the end goal. We are seeking some information, eventually to act on that information,” Singh argues. “If we think about that as the role of search, and not just search in the literal sense of literally searching for something, you can imagine something that actually acts on top of search results can be very useful.”
Singh characterizes this centralization of power as, “a very valid concern. Simply put, if you have these chat capabilities, you are much less inclined to actually go to the websites where this information resides,” he said.
It’s bad enough that chatbots have a habit of making broad intellectual leaps in their summarizations, but the practice may also “incentivize users not go to the website, not read the whole source, to just get the version that the chat interface gives you and sort of start relying on it more and more,” Singh warned.
In this, Singh and Dolan-Gavitt agree. “If you’re cannibalizing from the visits that a site would have gotten, and are no longer directing people there, but using the same information, there's an argument that these sites won't have much incentive to keep posting new content,” Dolan-Gavitt told Engadget. “On the other hand the need for clicks also is one of the reasons we get lots of spam and is one of the reasons why search has sort of become less useful recently. I think [the shortcomings of search are] a big part of why people are responding more positively to these chatbot products.”
That demand, combined with a nascent marketplace, is resulting in a scramble among the industry’s major players to get their products out yesterday, ready or not, underwhelming or not. That rush for market share is decidedly hazardous for consumers. Microsoft’s previous foray into AI chatbots, 2016’s Tay, ended poorly (to put it without the white hoods and goose stepping). Today, Redditors are already jailbreaking ChatGPT to generate racist content. These are two of the more innocuous challenges we will face as LLMs expand in use, but even they have proven difficult to stamp out, in part because they require coordination among an industry of vicious competitors.
“The kinds of things that I tend to worry about are, on the software side, whether this puts malicious capabilities into more hands, makes it easier for people to write malware and viruses,” Dolan-Gavitt said. “This is not as extreme as things like misinformation but certainly, I think it'll make it a lot easier for people to make spam.”
“A lot of the thinking around safety so far has been predicated on this idea that there would be just a couple kinds of central companies that, if you could get them all to agree, we could have some safety standards,” Dolan-Gavitt continued. “I think the more competition there is, the more you get this open environment where you can download an unrestricted model, set it up on your server and have it generate whatever you want. The kinds of approaches that relied on this more centralized model will start to fall apart.”
Google has notified customers of its Fi mobile virtual network operator (MVNO) service that hackers were able to access some of their information, according to TechCrunch. The tech giant said the bad actors infiltrated a third-party system used for customer support at Fi's primary network provider. While Google didn't name the provider outright, Fi relies on US Cellular and T-Mobile for connectivity. If you'll recall, the latter admitted in mid-January that hackers had been taking data from its systems since November last year.
T-Mobile said the attackers got away with the information of around 37 million postpaid and prepaid customers before it discovered and contained the issue. Back then, the carrier insisted that no passwords, payment information or Social Security numbers were stolen. Google Fi is saying the same thing, adding that no PINs or text message/call contents were taken, either. The hackers apparently only had access to users' phone numbers, account status, SIM card serial numbers and some service plan information, like international roaming.
Google reportedly told most users that they didn't have to do anything and that it's still working with Fi's network provider to "identify and implement measures to secure the data on that third-party system and notify everyone potentially impacted." That said, at least one customer claimed to have had more serious issues than most because of the breach. They shared part of Google's supposed email to them on Reddit, which told them that their "mobile phone service was transferred from [their] SIM card to another SIM card" for almost two hours on January 1st.
The customer said they received password reset notifications from Outlook, their crypto wallet account and two-factor authenticator Authy that day. They sent logs to 9to5Google to prove that the attackers had used their number to receive text messages that allowed them to access those accounts. Based on their Fi text history, the bad actors started resetting passwords and requesting two-factor authentication codes via SMS within one minute of transferring their SIM card. The customer was reportedly only able to regain control of their accounts after turning network access on their iPhone off and back on, though it's unclear if that's what solved the issue. We've reached out to Google for a statement regarding the customer's SIM-swapping claim and will update this post when we hear back.
Starlink raised its prices this spring, and now it's increasing the costs for its most demanding users. As The Verge reports, the SpaceX-run satellite internet provider is instituting a 1TB "Priority Access" monthly cap for data use between 7AM and 11PM beginning in December. Cross that limit and you'll spend the rest of the month relegated to "Basic Access" that, like with some phone carriers, deprioritizes your data when the network is busy. You might not notice much of a difference in typical situations, but this won't thrill you if you depend on sustained performance.
Service can get expensive if you insist on full performance around the clock. You'll pay 25 cents per gigabyte of priority data. As Reddit user Nibbloid pointed out, the math doesn't quite add up. It will cost you another $250 to get an extra 1TB of data — it would be cheaper to add a second subscription, at least if you don't mind the cost of an extra terminal. RV, Portability and "Best Effort" users also don't have any Priority Access.
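For what it's worth, the $250 figure is just the quoted per-gigabyte rate multiplied out; a quick back-of-the-envelope check using the prices given in the article:

```python
# Back-of-the-envelope check of the extra-data cost quoted above.
priority_rate_per_gb = 0.25   # dollars per GB of extra Priority Access data
extra_terabyte_gb = 1_000     # one additional terabyte, counted as 1,000GB
print(priority_rate_per_gb * extra_terabyte_gb)  # -> 250.0 dollars for the extra 1TB
```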
Other users face tougher restrictions. Fixed business service has peak-hour caps ranging from 500GB to 3TB, with extra full-speed data costing $1 per gigabyte. Mobility users have no Priority Access for recreational use, while commercial and Premium/Maritime users have respective 1TB and 5TB caps. Those higher-end users will pay $2 for every gigabyte of priority data they need.
The justifications will sound familiar if you've dealt with data caps from Comcast and other land-based internet providers. Starlink maintains that it has to balance supply with demand to provide fast service to the "greatest number of people." This is ostensibly to keep usage in check on a "finite resource."
The decision to cap users comes as SpaceX has called for government help to fund Starlink service in Ukraine at a claimed cost of nearly $400 million per year. While SpaceX chief Elon Musk has said the company will continue to pay regardless of assistance, it's clear SpaceX is worried about expenses as demand increases.
Google Fiber's sudden revival will include a dramatic boost to internet speeds. Google has revealed that it will offer 5Gbps and 8Gbps plans in early 2023 at respective monthly rates of $125 and $150. Both tiers will include symmetric upload and download rates, a WiFi 6 router and up to two mesh network extenders. The upgrades should help with massive file transfers while keeping lag and jittering to a bare minimum, according to the company.
Current customers, particularly in Kansas City, Utah and West Des Moines, can try the speedier plans as soon as November if they sign up to become "trusted testers." If you're eligible, Google will ask you how you expect to use the extra bandwidth.
This is a big jump from the previous-best 2Gbps service Google introduced in 2020, and could make a big difference if you're a gamer or thrive on cloud computing. If a 150GB Microsoft Flight Simulator download takes 11 minutes at 2Gbps, the 8Gbps plan could cut that wait to less than three minutes in ideal conditions. It certainly makes typical cable internet plans seem expensive. Comcast is already offering 6Gbps service in some areas, for instance, but that costs $300 per month on contract and doesn't yet include symmetric uploads.
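Those example times roughly check out as ideal-case arithmetic; real downloads add protocol and server overhead, which is presumably why the article quotes 11 minutes at 2Gbps rather than the theoretical 10. A quick sketch of the math:

```python
# Ideal-case download time for a file of a given size at a given line rate,
# ignoring protocol overhead and any server-side limits.
def download_minutes(size_gb: float, speed_gbps: float) -> float:
    size_gigabits = size_gb * 8          # 1 byte = 8 bits
    return size_gigabits / speed_gbps / 60

print(download_minutes(150, 2))   # ~10.0 minutes at 2Gbps
print(download_minutes(150, 8))   # ~2.5 minutes at 8Gbps
```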
Either way, the new plans represent a declaration of intent. Alongside the first network expansions in five years, the upgraded speeds suggest Google is getting back to Fiber's roots. That is, it's both raising expectations for truly fast internet access and (to a degree) spurring competition among incumbent providers. This could help Google pitch its other services, of course, but you might not mind if it gives telecoms an extra incentive to roll out '10G' and similar upgrades sooner than they might have otherwise.
Starlink satellite internet access has already spread to boats and RVs, and now it might accompany your child on the way home from class. SpaceX told the FCC in a filing that it's piloting Starlink aboard school buses in the rural US. The project would keep students connected during lengthy rides (over an hour in the pilot), ensuring they can complete internet-related homework in a timely fashion even if broadband is slow or non-existent at home.
The spaceflight company simultaneously backed FCC chair Jessica Rosenworcel's May proposal to bring WiFi to school buses, and said it supported the regulator's efforts to fund school and library internet access through the E-Rate program. To no one's surprise, SpaceX felt it had the best solution thanks to rapid satellite deployment, portable dishes and fast service for the "most remote" areas.
We've asked the FCC and SpaceX for comment, and will let you know if they respond. The pitch comes just two months after the FCC cleared the use of Starlink in vehicles, noting that it would serve the "public interest" to keep people online while on the move. The concept isn't new — Google outfitted school buses with WiFi in 2018 following tests, for example.
There's no guarantee the FCC will embrace SpaceX and fund bus-based Starlink service. The Commission rejected SpaceX's request for $885.5 million in help through the Rural Digital Opportunity Fund, and the firm responded by blasting the rejection as "grossly unfair" and allegedly unsupported by evidence. Satellite internet service theoretically offers more consistent rural coverage than cellular data, though, and Starlink competitors like Amazon's Project Kuiper have yet to deploy in earnest.
In August, LastPass admitted that an "unauthorized party" gained entry into its system. Any news about a password manager getting hacked can be alarming, but the company is now reassuring its users that their logins and other information weren't compromised in the incident.
In his latest update about the incident, LastPass CEO Karim Toubba said that the company's investigation with cybersecurity firm Mandiant has revealed that the bad actor had internal access to its systems for four days. They were able to steal some of the password manager's source code and technical information, but their access was limited to the service's development environment that isn't connected to customers' data and encrypted vaults. Further, Toubba pointed out that LastPass has no access to users' master passwords, which are needed to decrypt their vaults.
The CEO said there's no evidence that this incident "involved any access to customer data or encrypted password vaults." The investigation also found no evidence of unauthorized access beyond those four days, nor any sign that the hacker injected malicious code into the company's systems. Toubba explained that the bad actor was able to infiltrate the service's systems by compromising a developer's endpoint. The hacker then impersonated the developer "once the developer had successfully authenticated using multi-factor authentication."
Back in 2015, LastPass suffered a security breach that compromised users' email addresses, authentication hashes, password reminders and other information. A similar breach would be more devastating today, now that the service reportedly has over 33 million registered customers. While LastPass isn't asking users to do anything to keep their data safe this time, it's always good practice not to reuse passwords and to switch on multi-factor authentication.
Microsoft Teams stores authentication tokens in unencrypted plaintext, allowing attackers to potentially control communications within an organization, according to the security firm Vectra. The flaw affects the desktop app for Windows, Mac and Linux built using Microsoft's Electron framework. Microsoft is aware of the issue but said it has no plans for a fix anytime soon, since an exploit would also require network access.
According to Vectra, a hacker with local or remote system access could steal the credentials for any Teams user currently online, then impersonate them even when they're offline. They could also pretend to be the user through apps associated with Teams, like Skype or Outlook, while bypassing the multifactor authentication (MFA) usually required.
"This enables attackers to modify SharePoint files, Outlook mail and calendars, and Teams chat files," Vectra security architect Connor Peoples wrote. "Even more damaging, attackers can tamper with legitimate communications within an organization by selectively destroying, exfiltrating, or engaging in targeted phishing attacks."
Vectra created a proof-of-concept exploit that allowed them to send a message to the account of the credential holder via an access token. "Assuming full control of critical seats — like a company’s Head of Engineering, CEO, or CFO — attackers can convince users to perform tasks damaging to the organization."
The problem is mainly limited to the desktop app because the Electron framework (which essentially wraps a web app in a desktop shell) has "no additional security controls to protect cookie data," unlike modern web browsers. As such, Vectra recommends avoiding the desktop app until a patch is available and using the web application instead.
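For a sense of why unprotected token storage matters, here is a generic illustration in Python (not Electron or Teams code): a token written to a plaintext file can be read back by anything running as the same user, whereas handing it to the operating system's credential store at least keeps it out of casual file reads. The `keyring` package shown is a real Python library, but its use here is purely illustrative.

```python
# Generic illustration, not Microsoft's code: contrast a token dropped into a
# plaintext file with one stored in the OS credential store via the keyring
# package (pip install keyring).
from pathlib import Path
import keyring

token = "example-access-token"          # placeholder value

# Plaintext storage: any process running as this user can read the file back.
token_file = Path("auth_token.txt")
token_file.write_text(token)
print(token_file.read_text())           # trivially recoverable

# OS credential store: retrieval goes through the platform keychain instead.
keyring.set_password("example-app", "example-user", token)
print(keyring.get_password("example-app", "example-user"))
```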
When informed by cybersecurity news site Dark Reading of the vulnerability, Microsoft said it "does not meet our bar for immediate servicing as it requires an attacker to first gain access to a target network," adding that it would consider addressing it in a future product release.
However, threat hunter John Bambenek told Dark Reading it could provide a secondary means for "lateral movement" in the event of a network breach. He also noted that Microsoft is moving toward Progressive Web Apps that "would mitigate many of the concerns currently brought by Electron."
The trend of our gadgets and infrastructure constantly, often invasively, monitoring their users shows little sign of slowing — not when there's so much money to be made. Of course it hasn't been all bad for humanity, what with AI's help in advancing medical, communications and logistics tech in recent years. In his new book, Machines Behaving Badly: The Morality of AI, Scientia Professor of Artificial Intelligence at the University of New South Wales, Dr. Toby Walsh, explores the duality of potential that artificial intelligence/machine learning systems offer and, in the excerpt below, how to claw back a bit of your privacy from an industry built for omniscience.
The Second Law of Thermodynamics states that the total entropy of a system – the amount of disorder – only ever increases. In other words, the amount of order only ever decreases. Privacy is similar to entropy. Privacy is only ever decreasing. Privacy is not something you can take back. I cannot take back from you the knowledge that I sing Abba songs badly in the shower. Just as you can’t take back from me the fact that I found out about how you vote.
There are different forms of privacy. There’s our digital online privacy, all the information about our lives in cyberspace. You might think our digital privacy is already lost. We have given too much of it to companies like Facebook and Google. Then there’s our analogue offline privacy, all the information about our lives in the physical world. Is there hope that we’ll keep hold of our analogue privacy?
The problem is that we are connecting ourselves, our homes and our workplaces to lots of internet-enabled devices: smartwatches, smart light bulbs, toasters, fridges, weighing scales, running machines, doorbells and front door locks. And all these devices are interconnected, carefully recording everything we do. Our location. Our heartbeat. Our blood pressure. Our weight. The smile or frown on our face. Our food intake. Our visits to the toilet. Our workouts.
These devices will monitor us 24/7, and companies like Google and Amazon will collate all this information. Why do you think Google bought both Nest and Fitbit recently? And why do you think Amazon acquired two smart home companies, Ring and Blink Home, and built their own smartwatch? They’re in an arms race to know us better.
The benefits to the companies are obvious. The more they know about us, the more they can target us with adverts and products. There’s one of Amazon’s famous ‘flywheels’ in this. Many of the products they will sell us will collect more data on us. And that data will help target us to make more purchases.
The benefits to us are also obvious. All this health data can help make us live healthier. And our longer lives will be easier, as lights switch on when we enter a room, and thermostats move automatically to our preferred temperature. The better these companies know us, the better their recommendations will be. They’ll recommend only movies we want to watch, songs we want to listen to and products we want to buy.
But there are also many potential pitfalls. What if your health insurance premiums increase every time you miss a gym class? Or your fridge orders too much comfort food? Or your employer sacks you because your smartwatch reveals you took too many toilet breaks?
With our digital selves, we can pretend to be someone that we are not. We can lie about our preferences. We can connect anonymously with VPNs and fake email accounts. But it is much harder to lie about your analogue self. We have little control over how fast our heart beats or how widely the pupils of our eyes dilate.
We’ve already seen political parties manipulate how we vote based on our digital footprint. What more could they do if they really understood how we respond physically to their messages? Imagine a political party that could access everyone’s heartbeat and blood pressure. Even George Orwell didn’t go that far.
Worse still, we are giving this analogue data to private companies that are not very good at sharing their profits with us. When you send your saliva off to 23AndMe for genetic testing, you are giving them access to the core of who you are, your DNA. If 23AndMe happens to use your DNA to develop a cure for a rare genetic disease that you possess, you will probably have to pay for that cure. The 23AndMe terms and conditions make this very clear:
You understand that by providing any sample, having your Genetic Information processed, accessing your Genetic Information, or providing Self-Reported Information, you acquire no rights in any research or commercial products that may be developed by 23andMe or its collaborating partners. You specifically understand that you will not receive compensation for any research or commercial products that include or result from your Genetic Information or Self-Reported Information.
A Private Future
How, then, might we put safeguards in place to preserve our privacy in an AI-enabled world? I have a couple of simple fixes. Some are regulatory and could be implemented today. Others are technological and are something for the future, when we have AI that is smarter and more capable of defending our privacy.
The technology companies all have long terms of service and privacy policies. If you have lots of spare time, you can read them. Researchers at Carnegie Mellon University calculated that the average internet user would have to spend 76 work days each year just to read all the things that they have agreed to online. But what then? If you don’t like what you read, what choices do you have?
All you can do today, it seems, is log off and not use their service. You can’t demand greater privacy than the technology companies are willing to provide. If you don’t like Gmail reading your emails, you can’t use Gmail. Worse than that, you’d better not email anyone with a Gmail account, as Google will read any emails that go through the Gmail system.
So here’s a simple alternative. All digital services must provide four changeable levels of privacy.
Level 1: They keep no information about you beyond your username, email and password.
Level 2: They keep information on you to provide you with a better service, but they do not share this information with anyone.
Level 3: They keep information on you that they may share with sister companies.
Level 4: They consider the information that they collect on you as public.
And you can change the level of privacy with one click from the settings page. And any changes are retrospective, so if you select Level 1 privacy, the company must delete all information they currently have on you, beyond your username, email and password. In addition, there’s a requirement that all data beyond Level 1 privacy is deleted after three years unless you opt in explicitly for it to be kept. Think of this as a digital right to be forgotten.
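To make the proposal a little more concrete, here is one way the four levels, the retroactive one-click change and the three-year deletion rule could be modelled; this is purely an illustrative sketch of the idea above, not any real service's data model.

```python
# Illustrative sketch of the four privacy levels proposed above; not any
# real service's data model.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import IntEnum

class PrivacyLevel(IntEnum):
    ACCOUNT_ONLY = 1         # keep only username, email and password
    PRIVATE = 2              # keep data to improve the service, share with no one
    SHARED_WITH_SISTERS = 3  # may share data with sister companies
    PUBLIC = 4               # collected data is treated as public

@dataclass
class UserData:
    level: PrivacyLevel
    collected: dict[str, datetime] = field(default_factory=dict)  # item -> when it was stored
    opted_in_to_keep: bool = False

    def set_level(self, new_level: PrivacyLevel) -> None:
        """One-click, retrospective change: dropping to Level 1 wipes everything extra."""
        self.level = new_level
        if new_level is PrivacyLevel.ACCOUNT_ONLY:
            self.collected.clear()

    def purge_old_data(self, now: datetime) -> None:
        """Digital right to be forgotten: delete data older than three years unless opted in."""
        if not self.opted_in_to_keep:
            cutoff = now - timedelta(days=3 * 365)
            self.collected = {k: t for k, t in self.collected.items() if t > cutoff}
```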
I grew up in the 1970s and 1980s. My many youthful transgressions have, thankfully, been lost in the mists of time. They will not haunt me when I apply for a new job or run for political office. I fear, however, for young people today, whose every post on social media is archived and waiting to be printed off by some prospective employer or political opponent. This is one reason why we need a digital right to be forgotten.
More friction may help. Ironically, the internet was invented to remove frictions – in particular, to make it easier to share data and communicate more quickly and effortlessly. I’m starting to think, however, that this lack of friction is the cause of many problems. Our physical highways have speed and other restrictions. Perhaps the internet highway needs a few more limitations too?
One such problem is described in a famous cartoon: ‘On the internet, no one knows you’re a dog.’ If we introduced instead a friction by insisting on identity checks, then certain issues around anonymity and trust might go away. Similarly, resharing restrictions on social media might help prevent the distribution of fake news. And profanity filters might help prevent posting content that inflames.
On the other side, other parts of the internet might benefit from fewer frictions. Why is it that Facebook can get away with behaving badly with our data? One of the problems here is there’s no real alternative. If you’ve had enough of Facebook’s bad behaviour and log off – as I did some years back – then it is you who will suffer most. You can’t take all your data, your social network, your posts, your photos to some rival social media service. There is no real competition. Facebook is a walled garden, holding onto your data and setting the rules. We need to open that data up and thereby permit true competition.
That leaves me with a technological fix. At some point in the future, all our devices will contain AI agents that help connect us and can also protect our privacy. AI will move from the centre to the edge, away from the cloud and onto our devices. These AI agents will monitor the data entering and leaving our devices. They will do their best to ensure that data about us that we don’t want shared isn’t.
We are perhaps at the technological low point today. To do anything interesting, we need to send data up into the cloud, to tap into the vast computational resources that can be found there. Siri, for instance, doesn’t run on your iPhone but on Apple’s vast servers. And once your data leaves your possession, you might as well consider it public. But we can look forward to a future where AI is small enough and smart enough to run on your device itself, and your data never has to be sent anywhere.
This is the sort of AI-enabled future where technology and regulation will not simply help preserve our privacy, but even enhance it. Technical fixes can only take us so far. It is abundantly clear that we also need more regulation. For far too long the tech industry has been given too many freedoms. Monopolies are starting to form. Bad behaviours are becoming the norm. Many internet businesses are poorly aligned with the public good.
Digital regulation is probably best implemented at the level of nation-states or close-knit trading blocs. In the current climate of nationalism, bodies such as the United Nations and the World Trade Organization are unlikely to reach useful consensus. The common values shared by members of such large transnational bodies are too weak to offer much protection to the consumer.
The European Union has led the way in regulating the tech sector. The General Data Protection Regulation (GDPR), and the upcoming Digital Services Act (DSA) and Digital Markets Act (DMA), are good examples of Europe’s leadership in this space. A few nation-states have also started to pick up their game. The United Kingdom introduced a Google tax in 2015 to try to make tech companies pay a fair share of tax. And shortly after the terrible shootings in Christchurch, New Zealand, in 2019, the Australian government introduced legislation to fine companies up to 10 per cent of their annual revenue if they fail to take down abhorrent violent material quickly enough. Unsurprisingly, fining tech companies a significant fraction of their global annual revenue appears to get their attention.
It is easy to dismiss laws in Australia as somewhat irrelevant to multinational companies like Google. If they’re too irritating, they can just pull out of the Australian market. Google’s accountants will hardly notice the blip in their worldwide revenue. But national laws often set precedents that get applied elsewhere. Australia followed up with its own Google tax just six months after the United Kingdom. California introduced its own version of the GDPR, the California Consumer Privacy Act (CCPA), just a month after the regulation came into effect in Europe. Such knock-on effects are probably the real reason that Google has argued so vocally against Australia’s new Media Bargaining Code. They greatly fear the precedent it will set.
Nearly nine months after Congress passed President Biden’s $1 trillion infrastructure bill, the federal government has yet to allocate any of the $42.5 billion in funding the legislation set aside for expanding broadband service in underserved communities, according to The Wall Street Journal. Under the law, the Commerce Department can’t release that money until the Federal Communications Commission (FCC) publishes new coverage maps that more accurately show homes and businesses that don’t have access to high-speed internet.
Inaccurate coverage data has long derailed efforts by the federal government to address the rural broadband divide. The previous system the FCC used to map internet availability relied on Form 477 filings from service providers. Those documents have been known for their errors and exaggerations. In 2020, Congress began requiring the FCC to collect more robust coverage data as part of the Broadband DATA Act. However, it wasn’t until early 2021 that lawmakers funded the mandate and in August of that same year that the Commission published its first updated map.
Following a contractor dispute, the FCC will publish its latest maps sometime in mid-November. Once they're available, both consumers and companies will have a chance to challenge the agency’s data. As a result of that extra step, funding from the broadband plan likely won’t begin making its way to ISPs until the end of 2023, according to one analyst The Journal interviewed.
“We understand the urgency of getting broadband out there to everyone quickly,” Alan Davidson, the head of the Commerce Department unit responsible for allocating the funding, told the Journal. “We also know that we get one shot at this and we want to make sure we do it right.”