The new subscription plan revolves around “generative credits” (GCs), which Adobe defines as “tokens that enable customers to turn a text-based prompt into image and vector creations in Photoshop, Illustrator, Express and the Firefly web application.” It’s a made-up currency that facilitates the transmutation of your money into faster access to the Firefly AI. Once users hit their monthly allowance of GCs, they’ll be able to continue using Firefly, just at a slower rate.
The web application will be available through Creative Cloud at the Express and Express Premium price points, and those users will also gain access to the full paid version of Express Premium. Per a company release, Adobe Express is a new “AI first, all-in-one creativity app” designed specifically to generate commercially safe images and effects (and presumably the correct number of fingers). With it, users can generate design elements, images and video, PDFs and animations in over 100 languages, then export that content to social media and publishing platforms. For enterprise users, Firefly and Express Premium will be bundled together as an all-in-one editor.
To help allay those well-founded fears, Firefly embeds Content Credentials by default in all generated works. These credentials act as a digital “nutrition label,” displaying the asset’s name, creation date, creation tool and a log of any edits made to it. They’re the latest measure to come out of Adobe’s Content Authenticity Initiative, an industry group seeking to establish baseline ethical and transparency norms for AI development before the Feds step in and impose real regulations.
This article originally appeared on Engadget at https://www.engadget.com/adobes-firefly-ai-is-now-commercially-available-on-photoshop-illustrator-and-express-130049419.html?src=rss
Apple held its annual iPhone event on Tuesday, dubbed "Wonderlust" for 2023. Alongside the new iPhone 15 and next generation Apple Watch, the company announced that iOS 17, which the new generation of devices will use, will be available as a free software update on Monday, September 18.
The public preview for iOS 17 has been available for download since June and has already shown off a number of design refinements, including user-definable outgoing call screens (so you can pick what people see when you call them) and an option to send incoming calls to voicemail and read a transcription of them instead. Users will also be able to send quick voice or video messages if someone doesn't pick up their FaceTime calls, as well as mute and unmute themselves through their AirPods. The system's spellcheck has also been improved for fewer "ducking" autocorrects.
This is a developing story. Please check back for updates.
This article originally appeared on Engadget at https://www.engadget.com/ios-17-will-be-available-as-a-free-update-on-september-18-183529945.html?src=rss
AI chatbots are coming to your Salesforce applications, and it looks like they're coming to all of them. Company executives had a lot to show off during Tuesday's Dreamforce 2023 keynote address, including major updates to both the Einstein AI and Data Cloud services.
Einstein AI has received a slew of updates and upgrades since we saw it integrated with Slack back in May. The new Copilot service will take the existing AI chatbot and tune it to a client company's specific datasets using their Salesforce Data Cloud data. This enables the Einstein AI to provide better, more relevant and more actionable answers to employees' natural language questions and requests.
"Copilot is a conversational AI assistant for both companies and employees to securely and safely access generative AI to do their jobs better, faster and more easily," Salesforce CEO of AI, Clara Chi, said during a press call Monday. "It's going to be available to every Salesforce user across every cloud."
The new Copilot Studio takes that tuning process a step further, allowing customers to "customize Einstein Copilot with specific prompts, skills, and AI models," per a Monday release. This more tightly structures Einstein's behaviors without constricting its generative capabilities. What's more, Salesforce executives announced that Copilot will be available across a variety of mobile platforms, including "real-time chat, Slack, WhatsApp or SMS."
"We think that there is an incredible opportunity in AI," Patrick Stokes, Salesforce EVP and GM of Platform, said during the press call. "We think that it is creating jobs, we think that it is driving productivity across organizations... we also think that as customers and businesses are driving towards these AI strategies, they may not have the platform that they really want or that they really need."
He notes that much of their customers' data is fractured and split among different applications, data lakes, APIs and vendors. "This is all leading to low productivity, and what they really want, is one connected platform or one that will connect their data," Stokes continued. To address that need, Salesforce also announced that it is integrating the chatbot with its Data Cloud service to create a one-stop platform for building low-code AI-powered CRM applications. Salesforce calls it the Einstein 1 Platform.
"All of these fields coming together from different systems that speak different languages... now speak one language on the platform," Chi said. "Any data from any system can now be used like any other object or field in Salesforce."
One of Salesforce's first big innovations was its metadata framework, a system that describes the relationship between, and behaviors of, individual pieces of a company's data. That metadata framework is also an ideal medium for training machine learning models to better understand customer interactions and business operations, thereby improving and refining their performance.
"Much of Salesforce is built on this metadata framework — from our platform to analytics, commerce, sales service and marketing," Stokes said. "Now our Data Cloud and Einstein are really giving you one platform where you can build all of your customer experience in one place with all of the data and AI that you need."
To minimize the rate of hallucination and false responses by the AI, Salesforce has developed the "Einstein trust layer," which we first saw roll out to the company's CRM applications in March. The trust layer both secures data retrieval from the cloud and masks any sensitive or proprietary information before passing it on to the language model, with another round of toxicity checks after that.
The company does not deny that this new generation of generative AI can and likely will lead to job losses, such as coders whose services will be replaced by Einstein 1, but remains confident that there is reason for optimism. "I think it's a big moment in time, and there will certainly be an impact on certain jobs," Chi admitted. "There are also certainly going to be new jobs created, such as prompt engineer." Oh boy, a prompt engineer, the career every kid dreams of.
This article originally appeared on Engadget at https://www.engadget.com/the-next-generation-einstein-ai-will-put-a-chatbot-in-every-salesforce-application-120004305.html?src=rss
You didn't actually believe all those founders' myths about tech billionaires like Bezos, Jobs and Musk pulling themselves up by their bootstraps from some suburban American garage, did you? In reality, our corporate kings have been running the same playbook since the 18th century, when Lancashire's own Richard Arkwright wrote it. Arkwright is credited with developing a means of spinning cotton fully into thread — technically, he didn't invent or design the machine himself, but developed the overarching system in which it could be run at scale — and spinning that success into financial fortune. Never mind the fact that his 24-hour production lines were operated by boys as young as seven pulling 13-hour shifts.
In Blood in the Machine: The Origins of the Rebellion Against Big Tech — one of the best books I've read this year — LA Times tech reporter Brian Merchant lays bare the inhumane cost of capitalism wrought by the industrial revolution and celebrates the workers who stood against those first tides of automation: the Luddites.
The first tech titans were not building global information networks or commercial space rockets. They were making yarn and cloth. A lot of yarn, and a lot of cloth.
Like our modern-day titans, they started out as entrepreneurs. But until the nineteenth century, entrepreneurship was not a cultural phenomenon. Businessmen took risks, of course, and undertook novel efforts to increase their profits. Yet there was not a popular conception of the heroic entrepreneur, of the adventuring businessman, until after the birth of industrial capitalism. The term itself was popularized by Jean-Baptiste Say, in his 1803 work A Treatise on Political Economy. An admirer of Adam Smith’s, Say thought that The Wealth of Nations was missing an account of the individuals who bore the risk of starting new business; he called this figure the entrepreneur, which translated from the French as “adventurer” or “undertaker.”
For a worker, aspiring to entrepreneurship was different than merely seeking upward mobility. The standard path an ambitious, skilled weaver might pursue was to graduate from apprentice to journeyman weaver, who rented a loom or worked in a shop, to owning his own loom, to becoming a master weaver and running a small shop of his own that employed other journeymen. This was customary.
In the eighteenth and nineteenth centuries, as now in the twenty-first century, entrepreneurs saw the opportunity to use technology to disrupt longstanding customs in order to increase efficiencies, output, and personal profit. There were few opportunities for entrepreneurship without some form of automation; control of technologies of production grants its owner a chance to gain advantage or take pay or market share from others. In the past, like now, entrepreneurs started small businesses at some personal financial risk, whether by taking out a loan to purchase used handlooms and rent a small factory space, or by using inherited capital to procure a steam engine and a host of power looms.
The most ambitious entrepreneurs tapped untested technologies and novel working arrangements, and the most successful irrevocably changed the structure and nature of our daily lives, setting standards that still exist today. The least successful would go bankrupt, then as now.
In the first century of the Industrial Revolution, one entrepreneur looms above the others, and has a strong claim on the mantle of the first of what we’d call a tech titan today. Richard Arkwright was born to a middle-class tailor’s family and originally apprenticed as a barber and wigmaker. He opened a shop in the Lancashire city of Bolton in the 1760s. There, he invented a waterproof dye for the wigs that were in fashion at the time, and traveled the country collecting hair to make them. In his travels across the Midlands, he met spinners and weavers, and became familiar with the machinery they used to make cotton garments. Bolton was right in the middle of the Industrial Revolution’s cotton hub hotspot.
Arkwright took the money he made from the wigs, plus the dowry from his second marriage, and invested it in upgraded spinning machinery. “The improvement of spinning was much in the air, and many men up and down Lancashire were working at it,” Arkwright’s biographer notes. James Hargreaves had invented the spinning jenny, a machine that allowed a single worker to create eight threads of yarn simultaneously—though they were not very strong—in 1767. Working with one of his employees, John Kay, Arkwright tweaked the designs to spin much stronger threads using water or steam power. Without crediting Kay, Arkwright patented his water frame in 1769 and a carding engine in 1775, and attracted investment from wealthy hosiers in Nottingham to build out his operation. He built his famous water-powered factory in Cromford in 1771.
His real innovation was not the technology itself; several similar machines had been patented, some before his. His true innovation was creating and successfully implementing the system of modern factory work.
“Arkwright was not the great inventor, nor the technical genius,” as the Oxford economic historian Peter Mathias explains, “but he was the first man to make the new technology of massive machinery and power source work as a system — technical, organizational, commercial — and, as a proof, created the first great personal fortune and received the accolade of a knighthood in the textile industry as an industrialist.” Richard Arkwright Jr., who inherited his business, became the richest commoner in England.
Arkwright was the first start-up founder to launch a unicorn company, we might say, and the first tech entrepreneur to strike it wildly rich. He did so by marrying the emergent technologies that automated the making of yarn with a relentless new work regime. His legacy is alive today in companies like Amazon, which strive to automate as much of their operations as is financially viable, and to introduce surveillance-intensive worker-productivity programs.
Often called the grandfather of the factory, Arkwright did not invent the idea of organizing workers into strict shifts to produce goods with maximal efficiency. But he pursued the “manufactory” formation most ruthlessly, and most vividly demonstrated the practice could generate huge profits. Arkwright’s factory system, which was quickly and widely emulated, divided his hundreds of workers into two overlapping thirteen-hour shifts. A bell was rung twice a day, at 5 a.m. and 5 p.m. The gates would shut and work would start an hour later. If a worker was late, they sat the shift out, forfeiting that day’s pay. (Employers of the era touted this practice as a positive for workers; it was a more flexible schedule, they said, since employees no longer needed to “give notice” if they couldn’t work. This reasoning is reminiscent of that offered by twenty-first-century on-demand app companies.) For the first twenty-two years of its operation, the factory was worked around the clock, mostly by boys like Robert Blincoe, some as young as seven years old. At its peak, two-thirds of the 1,100-strong workforce were children. Richard Arkwright Jr. admitted in later testimony that they looked “extremely dissipated, and many of them had seldom more than a few hours of sleep,” though he maintained they were well paid.
The industrialist also built on-site housing, luring whole families from around the country to come work his frames. He gave them one week’s worth of vacation a year, “but on condition that they could not leave the village.” Today, even some of our most cutting-edge consumer products are still manufactured in similar conditions, in imposing factories with on-site dormitories and strictly regimented production processes, by workers who have left home for the job. Companies like Foxconn operate factories where the regimen can be so grueling it has led to suicide epidemics among the workforce.
The strict work schedule and a raft of rules instilled a sense of discipline among the laborers; long, miserable shifts inside the factory walls were the new standard. Previously, of course, similar work was done at home or in small shops, where shifts were not so rigid or enforced.
Arkwright’s “main difficulty,” according to the early business theorist Andrew Ure, did not “lie so much in the invention of a proper mechanism for drawing out and twisting cotton into a continuous thread, as in . . . training human beings to renounce their desultory habits of work and to identify themselves with the unvarying regularity of the complex automaton.” This was his legacy. “To devise and administer a successful code of factory discipline, suited to the necessities of factory diligence, was the Herculean enterprise, the noble achievement of Arkwright,” Ure continued. “It required, in fact, a man of a Napoleon nerve and ambition to subdue the refractory tempers of workpeople.”
Ure was hardly exaggerating, as many workers did in fact view Arkwright as akin to an invading enemy. When he opened a factory in Chorley, Lancashire, in 1779, a crowd of hundreds of cloth workers broke in, smashed the machines, and burned the place to the ground. Arkwright did not try to open another mill in Lancashire.
Arkwright also vigorously defended his patents in the legal system. He collected royalties on his water frame and carding engine until 1785, when the court decided that he had not actually invented the machines but had instead copied their parts from other inventors, and threw the patents out. By then, he was astronomically wealthy. Before he died, he would be worth £500,000, or around $425 million in today’s dollars, and his son would expand and entrench his factory empire.
The success apparently went to his head — he was considered arrogant, even among his admirers. In fact, arrogance was a key ingredient in his success: he had what Ure described as “fortitude in the face of public opposition.” He was unyielding with critics when they pointed out, say, that he was employing hundreds of children in machine-filled rooms for thirteen hours straight. That for all his innovation, the secret sauce in his groundbreaking success was labor exploitation.
In Arkwright, we see the DNA of those who would attain tech titanhood in the ensuing decades and centuries. Arkwright’s brashness rhymes with that of bullheaded modern tech executives who see virtue in a willingness to ignore regulations and push their workforces to extremes, or who, like Elon Musk, would gleefully wage war with perceived foes on Twitter rather than engage any criticism of how they run their businesses. Like Steve Jobs, who famously said, “We’ve always been shameless about stealing great ideas,” Arkwright surveyed the technologies of the day, recognized what worked and could be profitable, lifted the ideas, and then put them into action with an unmatched aggression. Like Jeff Bezos, Arkwright hyper-charged a new mode of factory work by finding ways to impose discipline and rigidity on his workers, and adapting them to the rhythms of the machine and the dictates of capital — not the other way around.
We can look back at the Industrial Revolution and lament the working conditions, but popular culture still lionizes entrepreneurs cut in the mold of Arkwright, who made a choice to employ thousands of child laborers and to institute a dehumanizing system of factory work to increase revenue and lower costs. We have acclimated to the idea that such exploitation was somehow inevitable, even natural, while casting aspersions on movements like the Luddites as being technophobic for trying to stop it. We forget that working people vehemently opposed such exploitation from the beginning.
Arkwright’s imprint feels familiar to us, in our own era where entrepreneurs loom large. So might a litany of other first-wave tech titans. Take James Watt, the inventor of the steam engine that powered countless factories in industrial England. Once he was confident in his product, much like a latter-day Bill Gates, Watt sold subscriptions for its use. With his partner, Matthew Boulton, Watt installed the engine and then collected annual payments that were structured around how much the customer would save on fuel costs compared to the previous engine. Then, like Gates, Watt would sue anyone he thought had violated his patent, effectively winning himself a monopoly on the trade. The Mises Institute, a libertarian think tank, argues that this had the effect of constraining innovation on the steam engine for thirty years.
Or take William Horsfall or William Cartwright. These were men who were less innovative than relentless in their pursuit of disrupting a previous mode of work as they strove to monopolize a market. (The word innovation, it’s worth noting, carried negative connotations until the mid-twentieth century or so; Edmund Burke famously called the French Revolution “a revolt of innovation.”) They can perhaps be seen as precursors to the likes of Travis Kalanick, the founder of Uber, the pugnacious trampler of the taxi industry. Kalanick’s business idea — that it would be convenient to hail a taxi from your smartphone — was not remarkably inventive. But he had intense levels of self-determination and pugnacity, which helped him overrun the taxi cartels and dozens of cities’ regulatory codes. His attitude was reflected in Uber’s treatment of its drivers, who, the company insists, are not employees but independent contractors, and in the endemic culture of harassment and mistreatment of the women on staff.
These are extreme examples, perhaps. But extremity is often needed to break down long-held norms, and the potential rewards are extreme, too. Like the mill bosses who shattered nineteenth-century standards and traditions by automating cloth-making, today’s start-up founders aim to disrupt one job category after another with gig work platforms or artificial intelligence, and encourage others to follow their lead. There’s a reason Arkwright and his factories were both emulated and feared. Even two centuries later, the most successful tech titans typically are.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-meet-richard-akrwright-the-worlds-first-tech-titan-205045895.html?src=rss
You didn't actually believe all those founder's myths about tech billionaires like Bezos, Jobs and Musk pulling themselves up by their bootstraps from some suburban American garage, did you? In reality, our corporate kings have been running the same playbook since the 18th century when Lancashire's own Richard Arkwright wrote it. Arkwright is credited with developing a means of forming cotton fully into thread — technically he didn't actually invent or design the machine, but developed the overarching system in which it could be run at scale — and spinning that success into financial fortune. Never mind the fact that his 24-hour production lines were operated by boys as young as seven pulling 13-hour shifts.
In Blood in the Machine: The Origins of the Rebellion Against Big Tech— one of the best books I've read this year — LA Times tech reporter Brian Merchant lays bare the inhumane cost of capitalism wrought by the industrial revolution and celebrates the workers who stood against those first tides of automation: the Luddites.
The first tech titans were not building global information networks or commercial space rockets. They were making yarn and cloth.
A lot of yarn, and a lot of cloth. Like our modern-day titans, they started out as entrepreneurs. But until the nineteenth century, entrepreneurship was not a cultural phenomenon. Businessmen took risks, of course, and undertook novel efforts to increase their profits. Yet there was not a popular conception of the heroic entrepreneur, of the adventuring businessman, until long after the birth of industrial capitalism. The term itself was popularized by Jean-Baptiste Say, in his 1803 work A Treatise on Political Economy. An admirer of Adam Smith’s, Say thought that The Wealth of Nations was missing an account of the individuals who bore the risk of starting new business; he called this figure the entrepreneur, translating it from the French as “adventurer” or “undertaker.”
For a worker, aspiring to entrepreneurship was different than merely seeking upward mobility. The standard path an ambitious, skilled weaver might pursue was to graduate from apprentice to journeyman weaver, who rented a loom or worked in a shop, to owning his own loom, to becoming a master weaver and running a small shop of his own that employed other journeymen. This was customary.
In the eighteenth and nineteenth centuries, as now in the twenty-first century, entrepreneurs saw the opportunity to use technology to disrupt longstanding customs in order to increase efficiencies, output, and personal profit. There were few opportunities for entrepreneurship without some form of automation; control of technologies of production grants its owner a chance to gain advantage or take pay or market share from others. In the past, like now, owners started small businesses at some personal financial risk, whether by taking out a loan to purchase used handlooms and rent a small factory space, or by using inherited capital to procure a steam engine and a host of power looms.
The most ambitious entrepreneurs tapped untested technologies and novel working arrangements, and the most successful irrevocably changed the structure and nature of our daily lives, setting standards that still exist today. The least successful would go bankrupt, then as now.
In the first century of the Industrial Revolution, one entrepreneur looms above the others, and has a strong claim on the mantle of the first of what we’d call a tech titan today. Richard Arkwright was born to a middle-class tailor’s family and originally apprenticed as a barber and wigmaker. He opened a shop in the Lancashire city of Bolton in the 1760s. There, he invented a waterproof dye for the wigs that were in fashion at the time, and traveled the country collecting hair to make them. In his travels across the Midlands, he met spinners and weavers, and became familiar with the machinery they used to make cotton garments. Bolton was right in the middle of the Industrial Revolution’s cotton hub hotspot.
Arkwright took the money he made from the wigs, plus the dowry from his second marriage, and invested it in upgraded spinning machinery. “The improvement of spinning was much in the air, and many men up and down Lancashire were working at it,” Arkwright’s biographer notes. James Hargreaves had invented the spinning jenny, a machine that automated the process of spinning cotton into a weft— halfway into yarn, basically— in 1767. Working with one of his employees, John Kay, Arkwright tweaked the designs to spin cotton entirely into yarn, using water or steam power. Without crediting Kay, Arkwright patented his water frame in 1769 and a carding engine in 1775, and attracted investment from wealthy hosiers in Nottingham to build out his operation. He built his famous water-powered factory in Cromford in 1771.
His real innovation was not the machinery itself; several similar machines had been patented, some before his. His true innovation was creating and successfully implementing the system of modern factory work.
“Arkwright was not the great inventor, nor the technical genius,” as the Oxford economic historian Peter Mathias explains, “but he was the first man to make the new technology of massive machinery and power source work as a system— technical, organizational, commercial— and, as a proof, created the first great personal fortune and received the accolade of a knighthood in the textile industry as an industrialist.” Richard Arkwright Jr., who inherited his business, became the richest commoner in England.
Arkwright père was the first start‑up founder to launch a unicorn company we might say, and the first tech entrepreneur to strike it wildly rich. He did so by marrying the emergent technologies that automated the making of yarn with a relentless new work regime. His legacy is alive today in companies like Amazon, which strive to automate as much of their operations as is financially viable, and to introduce highly surveilled worker-productivity programs.
Often called the grandfather of the factory, Arkwright did not invent the idea of organizing workers into strict shifts to produce goods with maximal efficiency. But he pursued the “manufactory” formation most ruthlessly, and most vividly demonstrated the practice could generate huge profits. Arkwright’s factory system, which was quickly and widely emulated, divided his hundreds of workers into two overlapping thirteen-hour shifts. A bell was rung twice a day, at 5 a.m. and 5 p.m. The gates would shut and work would start an hour later. If a worker was late, they sat the day out, forfeiting that day’s pay. (Employers of the era touted this practice as a positive for workers; it was a more flexible schedule, they said, since employees no longer needed to “give notice” if they couldn’t work. This reasoning is reminiscent of that offered by twenty-first-century on‑demand app companies.) For the first twenty-two years of its operation, the factory was worked around the clock, mostly by boys like Robert Blincoe, some as young as seven years old. At its peak, two-thirds of the 1,100-strong workforce were children. Richard Arkwright Jr. admitted in later testimony that they looked “extremely dissipated, and many of them had seldom more than a few hours of sleep,” though he maintained they were well paid.
The industrialist also built on‑site housing, luring whole families from around the country to come work his frames. He gave them one week’s worth of vacation a year, “but on condition that they could not leave the village.” Today, even our most cutting-edge consumer products are still manufactured in similar conditions, in imposing factories with on‑site dormitories and strictly regimented production processes, by workers who have left home for the job. Companies like Foxconn operate factories where the regimen can be so grueling it has led to suicide epidemics among the workforce.
The strict work schedule and a raft of rules instilled a sense of discipline among the laborers; long, miserable shifts inside the factory walls were the new standard. Previously, of course, similar work was done at home or in small shops, where shifts were not so rigid or enforced.
Arkwright’s “main difficulty,” according to the early business theorist Andrew Ure, did not “lie so much in the invention of a proper mechanism for drawing out and twisting cotton into a continuous thread, as in [. . .] training human beings to renounce their desultory habits of work and to identify themselves with the unvarying regularity of the complex automation.” This was his legacy. “To devise and administer a successful code of factory discipline, suited to the necessities of factory diligence, was the Herculean enterprise, the noble achievement of Arkwright,” Ure continued. “It required, in fact, a man of a Napoleon nerve and ambition to subdue the refractory tempers of workpeople.”
Ure was hardly exaggerating, as many workers did in fact view Arkwright as akin to an invading enemy. When he opened a factory in Chorley, Lancashire, in 1779, a crowd of stockingers and spinners broke in, smashed the machines, and burned the place to the ground. Arkwright did not try to open another mill in Lancashire.
Arkwright also vigorously defended his patents in the legal system. He collected royalties on his water frame and carding engine until 1785, when the court decided that he had not actually invented the machines but had instead copied their parts from other inventors, and threw the patents out. By then, he was astronomically wealthy. Before he died, he would be worth £500,000, or around $425 million in today’s dollars, and his son would expand and entrench his factory empire.
The success apparently went to his head— he was considered arrogant, even among his admirers. In fact, arrogance was a key ingredient in his success: he had what Ure described as “fortitude in the face of public opposition.” He was unyielding with critics when they pointed out, say, that he was employing hundreds of children in machine-filled rooms for thirteen hours straight. That for all his innovation, the secret sauce in his groundbreaking success was labor exploitation.
In Arkwright, we see the DNA of those who would attain tech titanhood in the ensuing decades and centuries. Arkwright’s brashness rhymes with that of bullheaded modern tech executives who see virtue in a willingness to ignore regulations and push their workforces to extremes, or who, like Elon Musk, would gleefully wage war with perceived foes on Twitter rather than engage any criticism of how he runs his businesses. Like Steve Jobs, who famously said, “We’ve always been shameless about stealing great ideas,” Arkwright surveyed the technologies of the day, recognized what worked and could be profitable, lifted the ideas, and then put them into action with an unmatched aggression. Like Jeff Bezos, Arkwright hypercharged a new mode of factory work by finding ways to impose discipline and rigidity on his workers, and adapting them to the rhythms of the machine and the dictates of capital— not the other way around.
We can look back at the Industrial Revolution and lament the working conditions, but popular culture still lionizes entrepreneurs cut in the mold of Arkwright, who made a choice to employ thousands of child laborers and to institute a dehumanizing system of factory work to increase revenue and lower costs. We have acclimated to the idea that such exploitation was somehow inevitable, even natural, while casting aspersions on movements like the Luddites as being technophobic for trying to stop it. We forget that working people vehemently opposed such exploitation from the beginning.
Arkwright’s imprint feels familiar to us, in our own era where entrepreneurs loom large. So might a litany of other first-wave tech titans. Take James Watt, whose improved steam engine powered countless factories in industrial England. Once he was confident in his product, much like a latter-day Bill Gates, Watt sold subscriptions for its use. With his partner, Matthew Boulton, Watt installed the engine and then collected annual payments that were structured around how much the customer would save on fuel costs compared to the previous engine. Then, like Gates, Watt would sue anyone he thought had violated his patent, effectively winning himself a monopoly on the trade. The Mises Institute, a libertarian think tank, argues that this had the effect of constraining innovation on the steam engine for thirty years.
Or take William Horsfall or William Cartwright. These were men who were less innovative than relentless in their pursuit of disrupting a previous mode of work as they strove to monopolize a market. (The word innovation, it’s worth noting, carried negative connotations until the mid-twentieth century or so; Edmund Burke famously called the French Revolution “a revolt of innovation.”) They can perhaps be seen as precursors to the likes of Travis Kalanick, the founder of Uber, the pugnacious trampler of the taxi industry. Kalanick’s business idea— that it would be convenient to hail a taxi from your smartphone— was not remarkably inventive. But he had intense levels of self-determination and pugnacity, which helped him overrun the taxi cartels and dozens of cities’ regulatory codes. His attitude was reflected in Uber’s treatment of its drivers, who, the company insists, are not employees but independent contractors, and in the endemic culture of harassment and mistreatment of the women on staff.
These are extreme examples, perhaps. But to disrupt long-held norms for the promise of extreme rewards, entrepreneurs often pursue extreme actions. Like the mill bosses who shattered nineteenth-century standards by automating cloth-making, today’s start-up founders aim to disrupt one job category after another with gig work platforms or artificial intelligence, and encourage others to follow their lead. There’s a reason Arkwright and his factories were both emulated and feared. Even two centuries later, many tech titans still are.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-blood-in-the-machine-brian-merchant-hachette-book-group-143056410.html?src=rss
SpaceX's Starship test launch in April will remain its last for the foreseeable future. The FAA announced Friday that it has closed its investigation into April's mishap, but that the company will not be allowed to resume test launches until it addresses a list of 63 "corrective actions" for its launch system.
"The vehicle’s structural margins appear to be better than we expected," SpaceX CEO and mascot Elon Musk joked with reporters in the wake of the late April test launch. Per a report from the US Fish and Wildlife Service, however, the failed launch resulted in a 385-acre debris field that saw concrete chunks flung more than 2,600 feet from the launchpad, a 3.5-acre wildfire and "a plume cloud of pulverized concrete that deposited material up to 6.5 miles northwest of the pad site.”
"Corrective actions include redesigns of vehicle hardware to prevent leaks and fires, redesign of the launch pad to increase its robustness, incorporation of additional reviews in the design process, additional analysis and testing of safety critical systems and components including the Autonomous Flight Safety System, and the application of additional change control practices," the FAA release reads. Furthermore, the FAA says that SpaceX will have to not only complete that list but also apply for and receive a modification to its existing license "that addresses all safety, environmental and other applicable regulatory requirements prior to the next Starship launch." In short, SpaceX has reached the "finding out" part.
SpaceX released a blog post shortly after the FAA's announcement was made public, obliquely addressing the issue. "Starship’s first flight test provided numerous lessons learned," the post reads, crediting its "rapid iterative development approach" with both helping develop all of SpaceX's vehicles to this point and "directly contributing to several upgrades being made to both the vehicle and ground infrastructure."
The company admitted that its Autonomous Flight Safety System (AFSS), which is designed to self-destruct a rocket when it goes off its flightpath but before it hits the ground, suffered "an unexpected delay" — that lasted 40 seconds. SpaceX did not elaborate on what cause, if any, it found for the fault but has reportedly since "enhanced and requalified the AFSS to improve system reliability."
"SpaceX is also implementing a full suite of system performance upgrades unrelated to any issues observed during the first flight test," the blog reads. Those improvements include a new hot-stage separation system which will more effectively decouple the first and second stages, a new electronic "Thrust Vector Control (TVC) system" for its Raptor heavy rockets, and "significant upgrades" to the orbital launch mount and pad system which just so happened to have failed in the first test but is, again, completely unrelated to this upgrade. Whether those improvements overlap with the 63 that the FAA is imposing could not be confirmed at the time of publication, as the FAA had not publicly released them.
This article originally appeared on Engadget at https://www.engadget.com/faa-grounds-starship-until-spacex-takes-63-corrective-actions-174825385.html?src=rss
AI-generated images and audio are already making their way into the 2024 Presidential election cycle. In an effort to stanch the flow of disinformation ahead of what is expected to be a contentious election, Google announced on Wednesday that it will require political advertisers to "prominently disclose" whenever their advertisement contains AI-altered or -generated aspects, "inclusive of AI tools." The new rules will be based on the company's existing Manipulated Media Policy and will take effect in November.
“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” a Google spokesperson said in a statement obtained by The Hill. Small and inconsequential edits like resizing images, minor cleanup to the background or color correction will all still be allowed — those that depict people or things doing stuff that they never actually did or those that otherwise alter actual footage will be flagged.
Those ads that do utilize AI aspects will need to label them as such in a "clear and conspicuous" manner that is easily seen by the user, per the Google policy. The ads will be moderated first through Google's own automated screening systems and then reviewed by a human as needed.
Google's actions run counter to other companies in social media. X/Twitter recently announced that it reversed its previous position and will allow political ads on the site, while Meta continues to take heat for its own lackadaisical ad moderation efforts.
The Federal Election Commission is also beginning to weigh in on the issue. Last month it sought public comment on amending a standing regulation "that prohibits a candidate or their agent from fraudulently misrepresenting other candidates or political parties" to clarify that the "related statutory prohibition applies to deliberately deceptive Artificial Intelligence campaign advertisements" as well.
This article originally appeared on Engadget at https://www.engadget.com/google-will-require-political-ads-prominently-disclose-their-ai-generated-aspects-232906353.html?src=rss
Gannett operates a number of regional and national publications including USA Today, The Arizona Republic and The Detroit Free Press. The company devised its "Lede AI" as a means of automating the dull work of summarizing the box scores of local high school sports leagues — a task the AI proved wholly incapable of. One such article read:
The Hardin County Tigers defeated the Memphis Business Execs 48-12 in a Tennessee high school football game on Friday. Hardin County scored early and often to roll over Memphis Business 48-12 in a Tennessee high school football matchup.
"High school reporting is different from covering college or professional sports," one anonymous Gannett sports writer told Yahoo News. "And high school reporting can go underappreciated, but it's extremely important. You're covering a community."
"You're not writing for as big of an audience, but you're writing for a very, very specific one," they added. "Family members — uncles, parents, people who care that your story has their kids' names. They're looking for keepsakes, things they can remember from their kids' high school career."
In response to the criticism, Gannett has elected to "pause" its use of the AI for the time being though the company made no mention of abandoning its use entirely. The company has also reportedly rechecked and updated every AI-written post for factual accuracy. The blurb above now simply reads: "The Hardin County Tigers defeated the Memphis Business Execs 48-12 in a Tennessee high school football game on Friday."
This article originally appeared on Engadget at https://www.engadget.com/usa-todays-publisher-had-to-update-all-of-the-sports-posts-its-ai-reporter-botched-215915908.html?src=rss
Since its release in 1993, id Software's DOOM franchise has become one of modern gaming's most easily recognizable IPs. The series has sold more than 10 million copies to date and spawned myriad RPG spinoffs, film adaptations and even a couple of tabletop board games. But the first game's debut turned out to be a close thing, as id Software cofounder John Romero describes in an excerpt from his new book DOOM GUY: Life in First Person. With a mere month before DOOM was scheduled for release in December 1993, the id team found itself still polishing and tweaking lead programmer John Carmack's novel peer-to-peer multiplayer architecture, ironing out level designs — at a time when the studio's programmers were also its QA team — and introducing everybody's favorite killer synonym to the gamer lexicon.
In early October, we were getting close to wrapping up the game, so progress quickened. On October 4, 1993, we issued the DOOM beta press release version, a build of the game we distributed externally to journalists and video game reviewers to allow them to try the game before its release. Concerned about security and leaks, we coded the beta to stop running on DOS systems after October 31, 1993. We still had useless pickups in the game, like the demonic daggers, demon chests, and other unholy items. I decided to get rid of those things because they made no sense to the core of the game and they rewarded the player with a score, which was a holdover from Wolfenstein 3-D. I removed the concept of having lives for the same reason. It was enough to have to start the level over after dying.
There was still one missing piece from the game, and it was a substantial one. We hadn’t done anything about the multiplayer aspect. In modern game development, multiplayer would be a feature factored in from day one, and architected accordingly, in an integrated fashion. Not with DOOM. It was November, and we were releasing in a month.
I brought it up to Carmack. “So when are we going to make multiplayer mode?”
The short answer was that Carmack was ready to take it on. Looking from the outside in, I suspect some might wonder if I wasn’t just more than a bit concerned since we were hoping to ship in 1993. After all, John had never programmed a multiplayer game before. The truth is that I never had a doubt, not for a second. Back in March, Carmack had already done some innovative network programming in DoomEd. He wanted to play around with the distributed objects system in NeXT-STEP, so he added the ability to allow multiple people who were running DoomEd to edit the same level. I could see him drawing lines and placing objects on my screen from his computer. Then, I’d add to his room by making a hallway, and so on.
For multiplayer, Carmack’s plan was to explore peer-to-peer networking. It was the “quick and dirty” solution instead of a client-server model. Instead of one central computer controlling and monitoring all the action between two to four players, each computer would run the game and sync up with the others. Basically, the computers send each other updates at high speed over the local network. The speed of Carmack’s network programming progress was remarkable. He had some excellent books on networking, and fortunately, those books were clearly written and explained the process of using IPX* well. In a few hours, he was communicating between two computers, getting the IPX protocol running so he could send information packets to each computer. I’d worked with him for three years and was used to seeing incredible things on his screen, but this was awe inspiring, even for him. In a matter of hours, he got two PCs talking to each other through a command-line-based tool, which proved he could send information across the network. It was the foundation needed to make the game network-capable. It was great for two players, and good for four, so we capped it at that. We were still on track to deliver on our promise of the most revolutionary game in history before the end of the year.
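The peer-to-peer model described above, in which every machine runs the full game and the peers stay in sync by exchanging small update packets rather than deferring to a central server, can be sketched as deterministic lockstep. The sketch below is illustrative only (the class names, tick logic, and movement rule are invented for this example, not id's actual code): each peer holds its own complete copy of the world, peers share only player inputs each tick, and identical simulations keep every copy in agreement.

```python
# Minimal sketch of deterministic peer-to-peer lockstep, the general idea
# behind DOOM-style networking. All names and details here are illustrative.
# Each peer runs its own full simulation; peers exchange only inputs.

class Peer:
    def __init__(self, peer_id, num_players):
        self.peer_id = peer_id
        # Every peer holds the whole world: here, one position per player.
        self.positions = [0] * num_players

    def local_input(self, tick):
        # Stand-in for reading the keyboard: deterministic per peer,
        # yielding a move of -1, 0, or +1.
        return (self.peer_id + tick) % 3 - 1

    def advance(self, all_inputs):
        # Apply every player's input to this peer's copy of the world.
        for player, move in enumerate(all_inputs):
            self.positions[player] += move

def run_lockstep(peers, ticks):
    for tick in range(ticks):
        # "Network" phase: gather each peer's input packet for this tick...
        inputs = [p.local_input(tick) for p in peers]
        # ...then every peer applies the same inputs in the same order.
        for p in peers:
            p.advance(inputs)

peers = [Peer(i, num_players=4) for i in range(4)]
run_lockstep(peers, ticks=35)

# Because every simulation is deterministic and consumed identical inputs,
# all four copies of the world agree without ever exchanging world state.
assert all(p.positions == peers[0].positions for p in peers)
```

The appeal of this design for 1993-era hardware is that the packets stay tiny (inputs, not world state), which is why it scaled comfortably to the four players id capped the game at.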
Carmack called me into his office to tell me he had it working. Both PCs in his office had the game open, and they were syncing up with two characters facing one another. On one PC, Carmack veered his character to the right. On the other monitor, that same character, appearing in third person, moved to the left. It was working!
“Oh my God!” I yelled, throwing in some other choice words to convey my amazement. “That is fucking incredible.”
When I’d first truly visualized the multiplayer experience, I was building E1M7. I was playing the game and imagined seeing two other players firing rockets at each other. At the time, I thought, “This is going to be astonishing. There is nothing like this. This is going to be the most amazing game planet Earth has ever seen.” Now, the moment had finally arrived.
I rushed to my computer and opened the game, connecting to Carmack’s computer.
When his character appeared on screen, I blasted him out of existence, screaming with delight as I knocked “John” out of the game with a loud, booming, bloody rocket blast. It was beyond anything I had ever experienced before and even better than I imagined it could be.
It was the future, and it was on my screen.
“This is fucking awesome!” I yelled. “This is the greatest thing ever!”
I wasn’t kidding. This was the realization of everything we put into the design months earlier. I knew DOOM would be the most revolutionary game in history, but now, it was also the most fun, all-consuming game in history. Now that all the key elements of our original design were in place, it was obvious. DOOM blew away every other game I’d ever played. From that moment on, if I wasn’t playing DOOM or working on DOOM, I was thinking about DOOM.
Kevin, Adrian, and Jay began running the game in multiplayer mode, too, competing to blow away monsters and each other. They were yelling just as much as I did, cheering every execution, groaning when they were killed and had to respawn. I watched them play. I saw the tension in their bodies as they navigated the dark, detailed world we’d created. They were hunters and targets, engaged in a kill-or-be-killed battle, not just with monsters, but with other, real people. Players were competing in real time with other people in a battle to survive. I thought of boxing or an extreme wrestling match, where you go in a cage to fight. This was much more violent, more deadly. It was all simulated, of course, but in the moment, it felt immediate. It was a new gaming experience, and I searched for a way to describe it.
“This is deathmatch,” I said. The team latched onto the name. It instantly articulated the sinister, survival vibe at the heart of DOOM.
In mid-November, we buckled down, getting in the “closing zone,” where you begin finalizing all areas of the game one by one. Now that Carmack had multiplayer networking figured out, we needed to fine-tune the gameplay and functionality, delivering two multiplayer modes—one in which players work together to kill monsters and demons, and the other where players try to kill each other (usually without monsters around). The first mode was called co-op, short for cooperative. The second, of course, was deathmatch.
Another important word needed to be coined. Deathmatch was all about getting the highest kill count in a game to be judged the winner. What would we call each kill? Well, we could call it a kill, but that felt like a less creative solution to me. Why don’t we have our own word? I went to the art room to discuss this with Kevin and Adrian.
“Hey guys, for each kill in a deathmatch we need a word for it that is not ‘kill,’” I said.
Kevin said, “Well, maybe we could use the word ‘frag.’"
“That sounds like a cool word, but what does it mean?” I asked.
“In the Vietnam War,” Kevin explained, “if a sergeant told his fire team to do something horrifically dangerous, instead of agreeing to it, they would throw a fragmentation grenade at the sergeant and call it friendly fire. The explanation was ‘Someone fragged the sarge!’”
“So, in a deathmatch we’re all fragging each other!” I said.
“Exactly."
And that is how “frag” entered the DOOM lexicon.
The introduction of deathmatch and co-op play profoundly affected the possibility space of gameplay in the levels. Crafting an enjoyable level for single-player mode with lots of tricks and traps was complex enough, but with the addition of multiplayer we had to be aware of other players in the level at the same time, and we had to make sure the single-player-designed level was fun to play in these new modes. Our levels were doing triple duty, and we had little time to test every possible situation, so we needed some simple rules to ensure quality. Since multiplayer gameplay was coming in quickly near the end of development, I had to define all the gameplay rules for co-op and deathmatch. We then had to modify every game map so that all modes worked in all difficulty levels. These are the rules I came up with quickly to help guide level quality:
Multiplayer Rule 1: A player should not be able to get stuck in an area without the possibility of respawning.
Multiplayer Rule 2: Multiple players (deathmatch or co-op mode) require more items; place extra health, ammo, and powerups.
Multiplayer Rule 3: Try to evenly balance weapon locations in deathmatch.
Multiplayer Rule 4: In deathmatch mode, try to place all the weapons in the level regardless of which level you’re in.
Additionally, we had to make all the final elements for the game: the intermissions and various menus had to be designed, drawn, and coded; the installation files needed to be created, along with the text instruction files, too. We also had to write code to allow gamers to play these multiplayer modes over their modems, since that was the hardware many people had in 1993. Compared to our previous games, the development pace on DOOM had been relatively relaxed, but in November our to-do list was crowded. Fortunately, everything fell into place. The last job for everyone was to stress-test DOOM.
Preparing for release, we knew we needed someone to handle our customer support, so earlier in the year, we’d hired Shawn Green, who quit his job at Apogee to join us. Throughout development, at every new twist and turn, we kept Shawn up to date. He had to know the game inside out to assist gamers should any issues arise. Shawn also helped us by testing the game as it went through production.
I noted earlier that id Software never had a Quality Assurance team to test our releases. For three years, John, Tom, and I doubled as the id QA team. We played our games on our PCs, pounding multiple keys, literally banging on keyboards to see if our assaults could affect the game. On the verge of release, and with more people than ever before in the office, we spent thirty hours playing DOOM in every way we could think of—switching modes, hitting commands—running the game on every level in every game mode we had, using every option we added to the game to see if there were any glitches.
Things were looking good. We decided to run one last “burn-in” test, a classic test for games where the developers turn the game on and let it run overnight. We ran DOOM on every machine in the office. The plan was to let it run for hours to see if anything bad happened. After about two hours of being idle, the game froze on a couple screens. The computers seemed to be okay—if you hit “escape” the menu came up—but the game stopped running.
We hadn’t seen a bug like this during development, but Carmack was on the case. He was thinking and not saying a word, evidently poring over the invisible engine map in his head. Ten minutes passed before he figured it out. He concluded that we were using the timing chip in the PC to track the refresh of the screen and process sound, but we weren’t clearing the timing chip counter when the game started, which was causing the glitch. Ironically, this logic had been part of the engine from day one, so it was surprising we hadn’t noticed it before.
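The failure mode Carmack diagnosed, a hardware tick counter that was never cleared when the game started, can be illustrated with a toy example. The code below is purely a sketch of the general bug class (the wrap width, counter values, and helper function are invented for illustration, not id's actual fix): if the game assumes the counter began at zero, whatever stale count the chip held before launch gets folded into every elapsed-time calculation.

```python
# Illustrative sketch of an uncleared hardware tick counter.
# Suppose the timer chip's counter wraps at 16 bits, and game logic
# computes elapsed tics from it.

WRAP = 1 << 16  # a 16-bit counter wraps at 65536

def elapsed_tics(counter_now, counter_at_start):
    # The fix: measure relative to the value actually read at startup,
    # modulo the wrap, instead of assuming the counter began at zero.
    return (counter_now - counter_at_start) % WRAP

stale = 60000                # count left on the chip before the game launched
now = (stale + 500) % WRAP   # chip value 500 tics into the game

buggy_elapsed = now % WRAP              # assumes startup value was 0 -> 60500
fixed_elapsed = elapsed_tics(now, stale)  # -> 500

assert buggy_elapsed == 60500
assert fixed_elapsed == 500
```

Either clearing the counter at startup or baselining against the first value read removes the stale offset; the sketch shows the baselining variant because it also handles the wraparound that an idle machine will eventually hit.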
He sat down at his computer, fixed the bug, and made a new build of the game. We put the update on all the machines and held our breath for the next two hours.
Problem solved.
That was the last hurdle. We were ready to launch. That day, December 10, would be DOOM Day.
***
* IPX is an acronym for Internetwork Packet Exchange. In sum, it is a way in which computers can talk to one another.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-doom-guy-john-romero-abrams-press-143005383.html?src=rss
In an unprecedented decision, Fulton County Judge Scott McAfee announced on Thursday that he will allow not only a press pool, cameras and laptops to be present in the courtroom during the election interference trial of former President Donald Trump, but that the entire proceedings will be livestreamed on YouTube as well. That stream will be operated by the court.
Trump and 18 co-defendants are slated to stand trial on October 23rd. They're facing multiple racketeering charges surrounding their efforts in the state of Georgia to subvert and overturn the results of the 2020 presidential election, what Fulton County DA Fani Willis describes as "a criminal enterprise" to unconstitutionally keep the disgraced politician in power. Trump has pled not guilty to all charges.
While recording court proceedings is uncommon in some jurisdictions, the state of Georgia takes a far more lax approach to allowing the practice.
“Georgia courts traditionally have allowed the media and the public in so that everyone can scrutinize how our process actually works,” Atlanta-based attorney Josh Schiffer told Atlanta First News. “Unlike a lot of states with very strict rules, courts in Georgia are going to basically leave it up to the judges.”
For example, when Trump was arraigned in New York on alleged financial crimes, only still photography was allowed. For his Miami charges, photography wasn't allowed at all. This means that the public will not be privy to the in-court proceedings of Trump's federal election interference case, only the Georgia state prosecution.
This article originally appeared on Engadget at https://www.engadget.com/trumps-georgia-election-interference-trial-will-be-livestreamed-on-youtube-193146662.html?src=rss