Posts with «military & defense» label

The Morning After: Drones that can charge on power lines

Battery life limits how long a drone can stay aloft and how much it can get done. So why not let it slurp from nearby power lines? (Well, there are reasons.)

Researchers at the University of Southern Denmark attached a gripper system to a Tarot 650 Sport drone, which they customized with an electric quadcopter propulsion system and an autopilot module. An inductive charger pulls current from the power line; in tests, the drone recharged five times over two hours. The benefit here is that power lines already exist (duh), but there is the real concern that a drone could damage a line and knock out electricity for thousands.
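
Why does clamping onto a power line work at all? A current-carrying conductor is wrapped in an alternating magnetic field that a pickup coil can harvest, and that field falls off quickly with distance. The sketch below is a generic back-of-the-envelope illustration of that field strength (the currents and distances are made-up examples, not figures from the Danish team's study):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, in T*m/A

def field_at_distance(current_a: float, r_m: float) -> float:
    """Magnetic flux density (tesla) at distance r from a long straight conductor:
    B = mu_0 * I / (2 * pi * r)."""
    return MU_0 * current_a / (2 * math.pi * r_m)

# A distribution line carrying ~100 A, sensed by a coil clamped 2 cm away:
b_close = field_at_distance(100, 0.02)   # ~1 mT
# The same line from 10 m away (a drone merely flying past):
b_far = field_at_distance(100, 10.0)     # ~2 uT

print(f"{b_close:.2e} T at 2 cm vs {b_far:.2e} T at 10 m")
```

The three-orders-of-magnitude gap is why the drone needs a gripper to physically perch on the line: only right at the conductor is the field strong enough for useful inductive charging.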

— Mat Smith

The biggest stories you might have missed

DJI’s RS4 gimbals make it easier to balance heavy cameras and accessories

Apple Vision Pro, two months later

Kobo’s new ereaders include its first with color displays

You can get these reports delivered daily, direct to your inbox. Subscribe right here!

The owner of WordPress has bought Beeper, that brazen messaging app

It challenged Apple and lost almost immediately.

WordPress and Tumblr owner Automattic has bought Beeper, the maker of the Beeper Mini app, which challenged Apple late last year with iMessage tricks on Android phones. Although Beeper lost its only USP mere days later, when Apple blocked the exploit, the incident gave the DOJ more ammunition in its antitrust suit against Apple. Bloomberg reported on Tuesday that Automattic paid $125 million. That's a lot of money, especially when Automattic already owns a messaging app, Texts. No, I hadn't heard of it either.

Continue reading.

Starlink terminals are reportedly being used by Russian forces in Ukraine

There’s a thriving black market for satellite-based internet providers.


According to a report by The Wall Street Journal, Russian forces in Ukraine are using Starlink satellite internet terminals to coordinate attacks in eastern Ukraine and Crimea as well as to control drones and other forms of military tech. The Starlink hardware is reaching Russian forces via a complex network of black-market sellers. After reports in February that Russian forces were using Starlink, US House Democrats demanded Musk act, noting Russian military use of the tech is “potentially in violation of US sanctions and export controls.” Starlink can disable individual terminals.

Continue reading.

Congress looks into blocking piracy sites in the US

The Motion Picture Association will work with politicians.

Motion Picture Association chair and CEO Charles Rivkin has revealed a plan to make "sailing the digital seas" (that is, streaming or downloading pirated content) harder. Rivkin said the association will work with Congress to establish and enforce site-blocking legislation in the United States. He added that almost 60 countries use site-blocking as a tool against piracy.

Continue reading.

You can now lie down while using a Meta Quest 3 headset

Finally.

Shh, relax… And strap two screens to your face.

Relaaaaax.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-drones-that-can-charge-on-power-lines-111517677.html?src=rss

Starlink terminals are reportedly being used by Russian forces in Ukraine

Starlink satellite internet terminals are being widely used by Russian forces in Ukraine, according to a report by The Wall Street Journal. The publication indicates that the terminals, which were developed by Elon Musk’s SpaceX, are being used to coordinate attacks in eastern Ukraine and Crimea. Additionally, Starlink terminals can be used on the battlefield to control drones and other forms of military tech.

The terminals are reaching Russian forces via a complex network of black market sellers. This is despite the fact that Starlink devices are banned in the country. WSJ followed some of these sellers as they smuggled the terminals into Russia and even made sure deliveries got to the front lines. Reporting also indicates that some of the terminals were originally purchased on eBay.

This black market for Starlink terminals allegedly stretches beyond occupied Ukraine and into Sudan. Many of these Sudanese dealers are reselling units to the Rapid Support Forces, a paramilitary group that’s been accused of committing atrocities like ethnically motivated killings, targeted abuse of human rights activists, sexual violence and the burning of entire communities. WSJ notes that hundreds of terminals have found their way to members of the Rapid Support Forces.

Back in February, Elon Musk addressed earlier reports that Starlink terminals were being used by Russian soldiers in the war against Ukraine. “To the best of our knowledge, no Starlinks have been sold directly or indirectly to Russia,” he wrote on X. The Kremlin also denied the reports, according to Reuters. Despite these proclamations, WSJ says that “thousands of the white pizza-box-sized devices” have landed with “some American adversaries and accused war criminals.”

After those February reports, House Democrats demanded that Musk take action, according to Business Insider, noting that Russian military use of the tech is "potentially in violation of US sanctions and export controls." Starlink actually has the ability to disable individual terminals, and each terminal includes geofencing technology that is supposed to prevent use in unauthorized countries, though it's unclear if black-market sellers can get around these hurdles.
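
Starlink's actual enforcement is server-side and proprietary, but the core idea of geofencing is simple: check each terminal's reported GPS fix against a polygon describing the authorized service area, and deny service outside it. A minimal sketch, using a standard ray-casting point-in-polygon test and an entirely hypothetical rectangular service area (not real Starlink data):

```python
def point_in_polygon(lat: float, lon: float, polygon: list) -> bool:
    """Ray-casting test: is (lat, lon) inside a polygon of (lat, lon) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Does a ray cast eastward from the point cross this edge?
        if (lat1 > lat) != (lat2 > lat):
            crossing_lon = lon1 + (lat - lat1) / (lat2 - lat1) * (lon2 - lon1)
            if lon < crossing_lon:
                inside = not inside
    return inside

# Hypothetical authorized area: a crude rectangle of (lat, lon) corners.
AUTHORIZED_AREA = [(48.0, 22.0), (48.0, 40.0), (52.5, 40.0), (52.5, 22.0)]

def terminal_allowed(lat: float, lon: float) -> bool:
    """Deny service to any terminal reporting a fix outside the authorized area."""
    return point_in_polygon(lat, lon, AUTHORIZED_AREA)
```

The obvious weakness, and the one the smuggling reports hint at, is that the check depends on the terminal's reported position and on the operator keeping the polygons current; resold hardware activated through third parties can muddy both.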

Musk, for his part, took steps to limit Ukraine's use of the technology on the grounds that the terminals were never intended for use in military conflicts. According to his biography, Musk also blocked Ukraine's use of Starlink near Crimea early in the conflict, ending the country's plans for an attack on Russia's naval fleet. Mykhailo Podolyak, an advisor to Ukrainian President Volodymyr Zelensky, wrote on X that "civilians, children are being killed" as a result of Musk's decision. He further dinged the billionaire by writing "this is the price of a cocktail of ignorance and a big ego."

However, Musk fired back, saying that Starlink was never active in the area near Crimea, so there was nothing to disable. He also said that the policy in question was decided upon before Ukraine's planned attack on the naval fleet. Ukraine did lose access to more than 1,300 Starlink terminals in the early days of the conflict due to a payment issue. SpaceX reportedly charged Ukraine $2,500 per month to keep each unit operational, which ballooned to $3.25 million per month. This pricing aligns with the company's high-cost premium plan. It's worth noting that SpaceX has donated more than 3,600 terminals to Ukraine.

SpaceX has yet to comment on the WSJ report regarding the black-market proliferation of Starlink terminals. We'll update this post when it does.

This article originally appeared on Engadget at https://www.engadget.com/starlink-terminals-are-reportedly-being-used-by-russian-forces-in-ukraine-154832503.html?src=rss

Israel’s military reportedly used Google Photos to identify civilians in Gaza

The New York Times reports that Israel’s military intelligence has been using an experimental facial recognition program in Gaza that’s misidentified Palestinian civilians as having ties to Hamas. Google Photos allegedly plays a part in the chilling program’s implementation, although it appears not to be through any direct collaboration with the company.

The surveillance program reportedly started as a way to search for Israeli hostages in Gaza. However, as often happens with new wartime technology, the initiative was quickly expanded to “root out anyone with ties to Hamas or other militant groups,” according to The NYT. The technology is flawed, but Israeli soldiers reportedly haven’t treated it as such when detaining civilians flagged by the system.

According to intelligence officers who spoke to The NYT, the program uses tech from the private Israeli company Corsight. Headquartered in Tel Aviv, it promises its surveillance systems can accurately recognize people with less than half of their faces exposed. It can supposedly be effective even with “extreme angles, (even from drones) darkness, and poor quality.”

But an officer in Israel's Unit 8200 learned that, in reality, it often struggled with grainy, obscured or injured faces. According to the official, Corsight's tech produced false positives, including cases where a correctly identified Palestinian was nonetheless wrongly flagged as having ties to Hamas.

Three Israeli officers told The NYT that its military used Google Photos to supplement Corsight’s tech. Intelligence officials allegedly uploaded data containing known persons of interest to Google’s service, allowing them to use the app’s photo search feature to flag them among its surveillance materials. One officer said Google’s ability to match partially obscured faces was superior to Corsight’s, but they continued using the latter because it was “customizable.”

Engadget emailed Google for a statement but hadn't heard back at the time of publication. We'll update this story if we get a response.

One man erroneously detained through the surveillance program was poet Mosab Abu Toha, who told The NYT he was pulled aside at a military checkpoint in northern Gaza as his family tried to flee to Egypt. He was then allegedly handcuffed and blindfolded, and then beaten and interrogated for two days before finally being returned. He said soldiers told him before his release that his questioning (and then some) had been a “mistake.”

The author of Things You May Find Hidden in My Ear: Poems From Gaza said he has no connection to Hamas and wasn't aware of an Israeli facial recognition program in Gaza. However, during his detention, he said he overheard someone saying the Israeli army had used a "new technology" on the group with whom he was incarcerated.

This article originally appeared on Engadget at https://www.engadget.com/israels-military-reportedly-used-google-photos-to-identify-civilians-in-gaza-200843298.html?src=rss

Senators ask intelligence officials to declassify details about TikTok and ByteDance

As the Senate considers the bill that would force a sale or ban of TikTok, lawmakers have heard directly from intelligence officials about the alleged national security threat posed by the app. Now, two prominent senators are asking the office of the Director of National Intelligence to declassify and make public what they have shared.

“We are deeply troubled by the information and concerns raised by the intelligence community in recent classified briefings to Congress,” Democratic Senator Richard Blumenthal and Republican Senator Marsha Blackburn wrote. “It is critically important that the American people, especially TikTok users, understand the national security issues at stake.”

The exact nature of the intelligence community's concerns about the app has long been a source of debate. Lawmakers in the House received a similar briefing just ahead of their vote on the bill. But while the briefing seemed to bolster support for the measure, some members said they left unconvinced, with one lawmaker saying that “not a single thing that we heard … was unique to TikTok.”

According to Axios, some senators described their briefing as “shocking,” though the group isn’t exactly known for its particularly nuanced understanding of the tech industry. (Blumenthal, for example, once pressed Facebook executives on whether they would “commit to ending finsta.”) In its report, Axios says that one lawmaker “said they were told TikTok is able to spy on the microphone on users' devices, track keystrokes and determine what the users are doing on other apps.” That may sound alarming, but it’s also a description of the kinds of app permissions social media services have been requesting for more than a decade.

TikTok has long denied that its relationship with parent company ByteDance would enable Chinese government officials to interfere with its service or spy on Americans. And so far, there is no public evidence that TikTok has ever been used in this way. If US intelligence officials do have evidence that is more than hypothetical, it would be a major bombshell in the long-running debate surrounding the app.

This article originally appeared on Engadget at https://www.engadget.com/senators-ask-intelligence-officials-to-declassify-details-about-tiktok-and-bytedance-180655697.html?src=rss

The Pentagon used Project Maven-developed AI to identify air strike targets

The US military has ramped up its use of artificial intelligence tools after the October 7 Hamas attacks on Israel, according to a new report by Bloomberg. Schuyler Moore, US Central Command's chief technology officer, told the news organization that machine learning algorithms helped the Pentagon identify targets for more than 85 air strikes in the Middle East this month.

US bombers and fighter aircraft carried out those air strikes against seven facilities in Iraq and Syria on February 2, fully destroying or at least damaging rockets, missiles, drone storage facilities and militia operations centers. The Pentagon had also used AI systems to find rocket launchers in Yemen and surface combatants in the Red Sea, which it had then destroyed through multiple air strikes in the same month.

The machine learning algorithms used to narrow down targets were developed under Project Maven, Google's now-defunct partnership with the Pentagon. To be precise, the project entailed the use of Google's artificial intelligence technology by the US military to analyze drone footage and flag images for further human review. It caused an uproar among Google employees: Thousands had petitioned the company to end its partnership with the Pentagon, and some even quit over its involvement altogether. A few months after that employee protest, Google decided not to renew the contract, which expired in 2019.

Moore told Bloomberg that US forces in the Middle East haven't stopped experimenting with the use of algorithms to identify potential targets using drone or satellite imagery even after Google ended its involvement. The military has been testing out their use over the past year in digital exercises, she said, but it started using targeting algorithms in actual operations after the October 7 Hamas attacks. She clarified, however, that human workers constantly checked and verified the AI systems' target recommendations. Human personnel were also the ones who proposed how to stage the attacks and which weapons to use. "There is never an algorithm that’s just running, coming to a conclusion and then pushing onto the next step," she said. "Every step that involves AI has a human checking in at the end."

This article originally appeared on Engadget at https://www.engadget.com/the-pentagon-used-project-maven-developed-ai-to-identify-air-strike-targets-103940709.html?src=rss

OpenAI's policy no longer explicitly bans the use of its technology for 'military and warfare'

Just a few days ago, OpenAI's usage policies page explicitly stated that the company prohibits the use of its technology for "military and warfare" purposes. That line has since been deleted. As first noticed by The Intercept, the company updated the page on January 10 "to be clearer and provide more service-specific guidance," as the changelog states. It still prohibits the use of its large language models (LLMs) for anything that can cause harm, and it warns people against using its services to "develop or use weapons." However, the company has removed language pertaining to "military and warfare."

While we've yet to see its real-life implications, this change in wording comes just as military agencies around the world are showing an interest in using AI. "Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, a managing director of the AI Now Institute, told the publication. 

The explicit mention of "military and warfare" in the list of prohibited uses indicated that OpenAI couldn't work with government agencies like the Department of Defense, which typically offers lucrative deals to contractors. At the moment, the company doesn't have a product that could directly kill or cause physical harm to anybody. But as The Intercept said, its technology could be used for tasks like writing code and processing procurement orders for things that could be used to kill people. 

When asked about the change in its policy wording, OpenAI spokesperson Niko Felix told the publication that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs." Felix explained that "a principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts," adding that OpenAI "specifically cited weapons and injury to others as clear examples." However, the spokesperson reportedly declined to clarify whether prohibiting the use of its technology to "harm" others included all types of military use outside of weapons development. 

This article originally appeared on Engadget at https://www.engadget.com/openais-policy-no-longer-explicitly-bans-the-use-of-its-technology-for-military-and-warfare-123018659.html?src=rss

Researchers posed as foreign actors, and data brokers sold them information on military servicemembers anyway

Third parties selling our personal data is annoying. But for certain sensitive populations like military service members, the selling of that information could quickly become a national security threat. Researchers at Duke University released a study on Monday tracking what measures data brokers have in place to prevent unidentified or potentially malign actors from buying personal data on members of the military. As it turns out, the answer is often few to none — even when the purchaser is actively posing as a foreign agent.

A 2021 Duke study by the same lead researcher revealed that data brokers advertised that they had access to — and were more than happy to sell — information on US military personnel. In this more recent study, researchers used wiped computers, VPNs, burner phones bought with cash and other means of identity obfuscation to go undercover. They scraped the websites of data brokers to see which were likely to have available data on servicemembers. Then they attempted to make those purchases, posing as two entities: datamarketresearch.org and dataanalytics.asia. With little or no vetting, several of the brokers transferred the requested data not only to the presumptively Chicago-based datamarketresearch, but also to the .asia domain's server, which was located in Singapore. The records cost only between 12 and 32 cents apiece.

The sensitive information included health records and financial information. Location data was also available, though the Duke team decided not to purchase it; it's not clear whether that choice was financial or ethical. “Access to this data could be used by foreign and malicious actors to target active-duty military personnel, veterans, and their families and acquaintances for profiling, blackmail, targeting with information campaigns, and more,” the report cautions. At an individual level, this could also include identity theft or fraud.

This gaping hole in our national security apparatus is due in large part to the absence of comprehensive federal regulations governing either individual data privacy, or much of the business practices engaged in by data brokers. Senators Elizabeth Warren, Bill Cassidy and Marco Rubio introduced the Protecting Military Service Members' Data Act in 2022 to give power to the Federal Trade Commission to prevent data brokers from selling military personnel information to adversarial nations. They reintroduced the bill in March 2023 after it stalled out. Despite bipartisan support, it still hasn’t made it past the introduction phase.

This article originally appeared on Engadget at https://www.engadget.com/researchers-posed-as-foreign-actors-and-data-brokers-sold-them-information-on-military-servicemembers-anyway-120038192.html?src=rss

The NSA has a new security center specifically for guarding against AI

The National Security Agency (NSA) is starting a dedicated artificial intelligence security center, as reported by AP. This move comes after the government has begun to increasingly rely on AI, integrating multiple algorithms into defense and intelligence systems. The security center will work to protect these systems from theft and sabotage, in addition to safeguarding the country from external AI-based threats.

The NSA’s recent move toward AI security was announced Thursday by outgoing director General Paul Nakasone. He says that the division will operate underneath the umbrella of the pre-existing Cybersecurity Collaboration Center. This entity works with private industry and international partners to protect the US from cyberattacks stemming from China, Russia and other countries with active malware and hacking campaigns.

For instance, the agency issued an advisory this week suggesting that Chinese hackers have been targeting government, industrial and telecommunications outfits via hacked router firmware. There’s also the specter of election interference, though Nakasone says he’s yet to see any evidence of Russia or China trying to influence the 2024 US presidential election. Still, this has been a big problem in the past, and that was before the rapid proliferation of AI algorithms like the CIA’s recently announced chatbot.

As artificial intelligence threatens to boost the abilities of these bad actors, the US government will look to this new security division to keep up. The NSA decided to establish the unit after conducting a study that suggested poorly secured AI models pose a significant national security challenge. This has only been compounded by the rise of generative AI technologies, which the NSA points out can be used for both good and bad purposes.

Nakasone says the organization will become “NSA’s focal point for leveraging foreign intelligence insights, contributing to the development of best practices guidelines, principles, evaluation, methodology and risk frameworks” for both AI security and for the goal of secure development and adoption of artificial intelligence within “our national security systems and our defense industrial base.” To that end, the group will work hand-in-hand with industry leaders, science labs, academic institutions, international partners and, of course, the Department of Defense.

Nakasone is on his way out of the NSA and the US Cyber Command and he’ll be succeeded by his current deputy, Air Force Lt. Gen. Timothy Haugh. Nakasone has been at his post since 2018 and, by all accounts, has had quite a successful run of it.

This article originally appeared on Engadget at https://www.engadget.com/the-nsa-has-a-new-security-center-specifically-for-guarding-against-ai-180354146.html?src=rss

SpaceX lands US Space Force contract for Starshield satellite communications

SpaceX has won a $70 million contract to provide the US Space Force with satellite communications via its Starshield program, Bloomberg reported. The company will effectively be repurposing its Starlink network for military usage as a way to provide a "secured satellite network for government entities," according to SpaceX's website. The contract has a one-year duration.

"The SpaceX contract provides for Starshield end-to-end service (via the Starlink constellation), user terminals, ancillary equipment, network management and other related services," a Space Force spokesperson told CNBC in a statement. The initial phase requires the Space Force to pay $15 million to SpaceX by September 30th, and SpaceX will support 54 military "mission partners" across Department of Defense (DoD) branches.

A group of US senators recently criticized SpaceX's actions in Ukraine, after a biography on Elon Musk revealed that he refused Ukraine's request to extend Starlink coverage to allow a naval attack on Russian-held Crimea. "We are deeply concerned with the ability and willingness of SpaceX to interrupt their service at Mr. Musk’s whim and for the purpose of handcuffing a sovereign country’s self-defense, effectively defending Russian interests," they wrote.

However, in a post on his social network X, Musk rejected that characterization. "Starlink needs to be a civilian network, not a participant to combat. Starshield will be owned by the US government and controlled by DoD Space Force," he said.

SpaceX is already a key contractor for the Pentagon, providing the military with rocket launches. Last year, the Space Force approved the company's reusable Falcon Heavy to carry US spy satellites into orbit. Earlier this year, SpaceX won a contract to provide an unspecified number of Starlink ground terminals for use in Ukraine. 

This article originally appeared on Engadget at https://www.engadget.com/spacex-lands-us-space-force-contract-for-starshield-satellite-communications-085045883.html?src=rss

Ukrainian official claims Elon Musk cost lives by refusing Starlink access during a drone operation

Excerpts from Walter Isaacson's Elon Musk biography are coming to light ahead of its release next week, revealing some new details about the billionaire's decision to provide Ukraine with Starlink access amid the country's war with Russia. According to an excerpt CNN reported on, Musk allegedly told SpaceX workers to shut down Starlink access close to the Crimea coast to prevent a Ukrainian drone attack on Russia's naval fleet.

Musk, who has reportedly been in contact with Russian officials including President Vladimir Putin, is said to have been worried that the attack would lead to Russia retaliating with nuclear weapons. Ukrainian leaders seemingly begged Musk to reactivate Starlink access but drones that were approaching Russian warships “lost connectivity and washed ashore harmlessly,” CNN cites Isaacson as stating.

Musk's alleged actions have had significant consequences for Ukraine, according to Mykhailo Podolyak, an advisor to President Volodymyr Zelensky. Podolyak wrote on X (the Musk-owned platform formerly known as Twitter) that in preventing drones from attacking the Russian ships, Musk enabled them to fire missiles at Ukrainian cities. "As a result, civilians, children are being killed," Podolyak claimed. "This is the price of a cocktail of ignorance and big ego."

Sometimes a mistake is much more than just a mistake. By not allowing Ukrainian drones to destroy part of the Russian military (!) fleet via #Starlink interference, @elonmusk allowed this fleet to fire Kalibr missiles at Ukrainian cities. As a result, civilians, children are…

— Михайло Подоляк (@Podolyak_M) September 7, 2023

According to Musk, however, Starlink was not active in those areas and so SpaceX had nothing to disable. “There was an emergency request from government authorities to activate Starlink all the way to Sevastopol. The obvious intent being to sink most of the Russian fleet at anchor," he wrote on X. "If I had agreed to their request, then SpaceX would be explicitly complicit in a major act of war and conflict escalation.”

There was an emergency request from government authorities to activate Starlink all the way to Sevastopol.

The obvious intent being to sink most of the Russian fleet at anchor.

If I had agreed to their request, then SpaceX would be explicitly complicit in a major act of war and…

— Elon Musk (@elonmusk) September 7, 2023

Regardless of how he's framing the situation, Musk has admitted to making another decision that has impacted the Ukraine-Russia conflict in one way or another. A report late last year indicated that around 1,300 Starlink terminals Ukraine was using temporarily went offline due to a dispute over payments for the internet service.

This article originally appeared on Engadget at https://www.engadget.com/ukrainian-official-claims-elon-musk-cost-lives-by-refusing-starlink-access-during-a-drone-operation-165926481.html?src=rss