
Hitting the Books: Why we haven't made the 'Citizen Kane' of gaming

Steven Spielberg's wholesome sci-fi classic, E.T. the Extra-Terrestrial, became a cultural touchstone following its release in 1982. The film's hastily developed (as in, "you have five weeks to get this to market") Atari 2600 tie-in game became a cultural touchstone for entirely different reasons.

In his new book, The Stuff Games Are Made Of, Pippin Barr, an experimental game maker and assistant professor of design and computation arts at Concordia University in Montreal, deconstructs the game design process, using an octet of his own previous projects to shed light on specific aspects of how games could be better put together. In the excerpt below, Dr. Barr muses on what makes good cinema versus good games and why the storytelling goals of those two mediums may not necessarily align.

MIT Press

Excerpted from The Stuff Games Are Made Of by Pippin Barr. Reprinted with permission from The MIT Press. Copyright 2023.


In the Atari 2600 video game version of the film E.T. the Extra-Terrestrial (Spielberg 1982), also called E. T. the Extra-Terrestrial (Atari 1982), the defining experience is falling into a pit. It’s cruelly fitting, then, that hundreds of thousands of the game’s physical cartridges were buried in a landfill in 1983. Why? It was one of the most spectacular failures in video game history. Why? It’s often put front and center as the worst game of all time. Why? Well, when you play it, you keep falling into a pit, among other things ...

But was the video game E.T. so terrible? In many ways it was a victim of the video game industry’s voracious hunger for “sure fire” blockbusters. One strategy was to adapt already-popular movies like Raiders of the Lost Ark or, yes, E.T. the Extra-Terrestrial. Rushed to market with a development time of only five weeks, the game inevitably lacked the careful crafting of action-oriented gameplay backed by audience testing that other Atari titles had. I would argue, though, that its creator, Howard Scott Warshaw, found his way into a more truthful portrayal of the essence of the film than you might expect.

Yes, in the game E.T. is constantly falling into pits as he flees scientists and government agents. Yes, the game is disorienting in terms of understanding what to do, with arcane symbols and unclear objectives. But on the other hand, doesn’t all that make for a more poignant portrayal of E.T.’s experience, stranded on an alien planet, trying to get home? What if E.T. the Extra-Terrestrial is a good adaptation of the film, and just an unpopular video game?

The world of video games has admired the world of film from the beginning. This has led to a long-running conversation between game design and the audiovisual language of cinema, from cutscenes to narration to fades and more. In this sense, films are one of the key materials games are made of. However, even video games’ contemporary dominance of the revenue competition has not been quite enough to soothe a nagging sense that games just don’t measure up. Roger Ebert famously (and rather overbearingly) claimed that video games could “never be art,” and although we can mostly laugh about it now that we have games like Kentucky Route Zero and Disco Elysium, it still hurts. What if Ebert was right in the sense that video games aren’t as good at being art as cinema is?

Art has seldom been on game studios’ minds in making film adaptations. From Adventures of Tron for the Atari 2600 to Toy Story Drop! on today’s mobile devices, the video game industry has continually tried for instant brand recognition and easy sales via film. Sadly, the resulting games tend just to lay movie visuals and stories over tried-and-true game genres such as racing, fighting, or match 3. And the search for films that are inherently “video game-y” hasn’t helped much either. In Marvel’s Spider-Man: Miles Morales, Spider-Man ends up largely as a vessel for swinging and punching, and you certainly can’t participate in Miles’s inner life. So what happened to the “Citizen Kane of video games”?

A significant barrier has been game makers’ obsession with the audiovisual properties of cinema, the specific techniques, rather than some of the deeply structural or even philosophical opportunities. Film is exciting because of the ways it unpacks emotion, represents space, deploys metaphor, and more. To leverage the stuff of cinema, we need to take a close look at these other elements of films and explore how they might become the stuff of video games too. One way to do that in an organized way is to focus on adaptation, which is itself a kind of conversation between media that inevitably reveals much about both. And if you’re going to explore film adaptation to find the secret recipe, why not go with the obvious? Why not literally make Citizen Kane (Welles 1941) into a video game? Sure, Citizen Kane is not necessarily the greatest film of all time, but it certainly has epic symbolic value. Then again, Citizen Kane is an enormous, complex film with no car chases and no automatic weapons. Maybe it’s a terrible idea.

As video games have ascended to a position of cultural and economic dominance in the media landscape, there has been a temptation to see film as a toppled Caesar, with video games in the role of a Mark Antony who has “come to bury cinema, not to praise it.” But as game makers, we haven’t yet mined the depths offered by cinema’s rich history and its exciting contemporary voices. Borrowing cinema’s visual language of cameras, points of view, scenes, and so on was a crucial step in figuring out how video games might be structured, but the stuff of cinema has more to say than that. Citizen Kane encourages us to embrace tragedy and a quiet ending. The Conversation shows us that listening can be more powerful than action. Beau Travail points toward the beauty of self-expression in terrible times. Au Hasard Balthazar brings the complex weight of our own responsibilities to the fore.

There’s nothing wrong with an action movie or an action video game, but I suggest there’s huge value in looking beyond the low-hanging fruit of punch-ups and car chases to find genuinely new cinematic forms for the games we play. I’ll never play a round of Combat in the same way, thanks to the specter of Travis Bickle psyching himself up for his fight against the world at large. It’s time to return to cinema in order to think about what video games have been and what they can be. Early attempts to adapt films into games were perhaps “notoriously bad” (Fassone 2020), but that approach remains the most direct way for game designers to have a conversation with the cinematic medium and to come to terms with its potential. Even if we accept the idea that E.T. was terrible, which I don’t, it was also different and new.

This is bigger than cinema, though, because we’re really talking about adaptation as a form of video game design. While cinema (and television) is particularly well matched, all other media from theater to literature to music are teeming with ideas still untried in the youthful domain of video games. One way to fast-track experimentation is of course to adapt plays, poems, and songs. To have those conversations. There can be an air of disdain for adaptations compared to originals, but I’m with Linda Hutcheon (2012, 9) who asserts in A Theory of Adaptation that “an adaptation is a derivation that is not derivative — a work that is second without being secondary.” As Jay Bolter and Richard Grusin (2003, 15) put it, “what is new about new media comes from the particular ways in which they refashion older media.” This is all the more so when the question is how to adapt a specific work in another medium, where, as Hutcheon claims, “the act of adaptation always involves both (re-)interpretation and then (re-)creation." That is, adaptation is inherently thoughtful and generative; it forces us to come to terms with the source materials in such a direct way that it can lay our design thinking bare—the conversation is loud and clear. As we’ve seen, choosing films outside the formulas of Hollywood blockbusters is one way to take that process of interpretation and creation a step further by exposing game design to more diverse cinematic influences.

Video games are an incredible way to explore not just the spaces we see on-screen, but also “the space of the mind.” When a game asks us to act as a character in a cinematic world, it can also ask us to think as that character, to weigh our choices with the same pressures and history they are subject to. Hutcheon critiques games’ adaptive possibilities on the grounds that their programming has “an even more goal-directed logic than film, with fewer of the gaps that film spectators, like readers, fill in to make meaning.” To me, this seems less like a criticism and more like an invitation to make that space. Quiet moments in games, as in films, may not be as exhilarating as a shoot-out, but they can demand engagement in a way that a shoot-out can’t. Video games are ready for this.

The resulting games may be strange children of their film parents, but they’ll be interesting children too, worth following as they grow up. Video game film adaptations will never be films, nor should they be—they introduce possibilities that not only recreate but also reimagine cinematic moments. The conversations we have with cinema through adaptation are ways to find brand new ideas for how to make games. Even the next blockbuster.

Yeah, cinema, I’m talkin’ to you.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-stuff-games-are-made-of-pippin-barr-mit-press-143054954.html?src=rss

Snapchat's My AI chatbot glitched so hard it started posting Stories

My AI, the in-app digital assistant that rides herd on your Snapchat Plus experience, has suffered numerous breakdowns and technical malfunctions since its debut in February. Tuesday was more of the same as the chatbot took it upon itself to post single-second-long Stories to users' feeds and then go unresponsive for extended periods of time. Thing is, My AI doesn't have the capacity to post to Stories. And now it's got a bunch of people on Twitter wondering if we're at the dawning of the Singularity.

Did Snapchat Ai just add a picture of my wall/ceiling to their Snapchat story?

Snapchat AI - Left

My wall/ceiling- Right pic.twitter.com/bh8I3Aiwun

— Matt Esparza (@matthewesp) August 16, 2023

As first reported by TechCrunch, the My AI chatbot posted a two-tone image in Stories, which one user mistook for a shot of their ceiling. What's more, upon being asked about the mysterious post, the bot would either go dark or respond that it was suffering a technical issue. This explanation proved insufficient for many users, causing minor panic and jokes about the AI system's imminent awakening. In the end, however, it really was a technical issue.

“My AI experienced a temporary outage that’s now resolved,” a spokesperson told TechCrunch, adding that "At this time, My AI does not have Stories feature."

The My AI bot is bundled as part of the company's $3.99/month Snapchat Plus package and offers users a variety of features. These include AR filter recommendations and suggestions for restaurants and activities based on currently popular places on the Snap Map. It also offers AI functionality in group chats, photo and video snaps, and text messages, as well as an AI persona. A text-to-image genAI is also reportedly in the works, though the ads have already arrived.

This article originally appeared on Engadget at https://www.engadget.com/snapchats-my-ai-chatbot-glitched-so-hard-it-started-posting-stories-190809341.html?src=rss

NCSoft's new AI suite is trained to streamline game production

Despite being publicly available for less than a year, generative AI technology can already be found all around us, helping us browse the internet, taking the drudgery out of computer coding, and even improving the dialog in popular video game franchises. On Wednesday, NCSoft, the South Korean game developer and publisher behind long-running MMORPG Guild Wars, announced that it has developed four new AI large language models, dubbed VARCO, to help streamline future game development.

VARCO ("Via AI, Realize your Creativity and Originality," if you squint just right) is both the quartet of language models the company has developed, as well as all of the products and services the company plans to build atop them. Those potential products include, “digital humans, generative AI platforms, and conversational language models,” per an NCSoft release.

The four models are VARCO the base LLM, as well as Art, Text and Human. LLM will be the first released — the Korean-language version is available August 16, while the English and bilingual iterations will arrive by the end of the month. VARCO LLM will be trained with 1.3B, 6.4B and 13B parameters to start, with larger versions made available later in the year.

"Our LLM is trained with datasets that are either publicly available for pretraining, collected from the Internet or internally constructed,” Jehee Lee, CRO of NCSOFT, told Engadget via email. “We are putting efforts to improve the performance of LLM and generate text that does not undermine the universal values of society.”

"Bias is one of our biggest concerns in the process of constructing data," Lee continued, “ In order to build high-quality datasets, NC utilizes a pipeline where collected data are analyzed and evaluated in various aspects (e.g. format, content, ethics) and are refined based on our own criteria."

Larger-parameter versions of LLM will arrive later in the year, allowing for more nuanced and complete responses from the system. “As the size of the model increases, the performance increases,” Lee notes, “but the operating cost also increases accordingly.” As such, ensuring that models are developed to be both computationally powerful and computationally lightweight is of paramount importance, he continued.

“NC has accumulated technologies for model weight reduction and optimization as we have been applying real-time machine translation and NLP-based technologies to our games that require large-scale traffic processing in real-time,” Lee explained. “Based on our experiences, we plan to develop and sequentially release high-performance lightweight models specialized for various individual tasks in the future.”

The three additional services will be based upon that foundational model. VARCO Art, the text-to-image generator, will reportedly be capable of closely matching the work of specific artists (with their permission of course). “With dozens of illustrations worked consistently by an artist, generative models can be trained to reveal enough of a particular style, such as coloring, touches, and outlines,” Lee said. “Thus far, artists or experts can spot the difference, but it is not easy for ordinary people to distinguish if it is generated by AI or not.”

Additionally, VARCO Text generates and manages the core settings of a game, including plot scenarios and character worldviews, while VARCO Human is “an integrated tool for creating, editing, and managing digital humans,” per the release. Art, Text, and Human can all be used and managed within the company’s VARCO Studio suite, which will be available in 2024.

The VARCO models will initially be used to help streamline game development efforts, much in the same way as Ubisoft’s Ghostwriter. However, VARCO sets itself apart, Lee explains, in that it is a “special Vertical AI that can directly solve specific pain points regardless of industry and domain.”

“The generative model can be used for the planning, development, and operation of games,” Lee said. “The worldview of the game, character name, type, and attributes can be created and edited. Also, it creates conversations for specific situations and quests tailored to regions,” as well as accompanying images.

And like Ghostwriter, VARCO cannot operate in isolation. “We need final human judgment and additional work in order to produce sophisticated results,” Lee said. He argues that “generative AI technology will contribute to further increasing the value of human labor” by distinguishing between the two.

In allowing AI to handle the low-level, repetitive tasks that perpetually slow game development, human designers will be freed to “focus on more complex tasks,” Lee said. “The results of generative AI will not be the final product, but rather it will serve to inspire humans and help them quickly reach higher goals with the help of AI’s emergent and large-scale creation capabilities.”

But games are only the beginning for NC’s AI aspirations. “We want to create new high value-added businesses by entering different fields beyond games,” Lee said. “I think we can advance with our AI in any industrial field rather than just one field. NC plans to pay attention to new growth fields that we can expect greater potential and synergy when combined with AI, such as fashion, life, health, mobility, bio, robotics, and content."

This article originally appeared on Engadget at https://www.engadget.com/ncsofts-new-ai-suite-is-trained-to-streamline-game-production-141653946.html?src=rss

An Iowa school district is using AI to ban books

It certainly didn't take long for AI's other shoe to drop, what with the emergent technology already being perverted to commit confidence scams and generate spam content. We can now add censorship to that list, as the Globe Gazette reports that the school board of Mason City, Iowa, has begun leveraging AI technology to cultivate lists of potentially bannable books from the district's libraries ahead of the 2023/24 school year.

In May, the Republican-controlled state legislature passed, and Governor Kim Reynolds subsequently signed, Senate File 496 (SF 496), which enacted sweeping changes to the state's education curriculum. Specifically, it limits what books can be made available in school libraries and classrooms, requiring titles to be “age appropriate” and without “descriptions or visual depictions of a sex act,” per Iowa Code 702.17.

But ensuring that every book in the district's archives adheres to these new rules is quickly turning into a mammoth undertaking. "Our classroom and school libraries have vast collections, consisting of texts purchased, donated, and found," Bridgette Exman, assistant superintendent of curriculum and instruction at Mason City Community School District, said in a statement. "It is simply not feasible to read every book and filter for these new requirements."

As such, the Mason City School District is bringing in AI to parse suspect texts for banned ideas and descriptions since there are simply too many titles for human reviewers to cover on their own. Per the district, a "master list" is first cobbled together from "several sources" based on whether there were previous complaints of sexual content. Books from that list are then scanned by "AI software" — the district doesn't specify which systems will be employed — which tells the state censors whether or not there actually is a depiction of sex in the book. 

“Frankly, we have more important things to do than spend a lot of time trying to figure out how to protect kids from books,” Exman told PopSci via email. “At the same time, we do have a legal and ethical obligation to comply with the law. Our goal here really is a defensible process.”

So far, the AI has flagged 19 books for removal.

This article originally appeared on Engadget at https://www.engadget.com/mason-city-iowa-school-district-ai-book-ban-censorship-202541565.html?src=rss

Hitting the Books: The thirty-year quest to make WiFi a connectivity reality

The modern world of consumer tech wouldn't exist as we know it if not for the near-ubiquitous connectivity that Wi-Fi internet provides. It serves as the wireless link bridging our mobile devices and smart home appliances, enabling our streaming entertainment and connecting us to the global internet. 

In his new book, Beyond Everywhere: How Wi-Fi Became the World’s Most Beloved Technology, Greg Ennis, who co-authored the proposal that became the technical basis for Wi-Fi before founding the Wi-Fi Alliance and serving as its VP of Technology for a quarter century, guides readers through the fascinating (and sometimes frustrating) genesis of this now-everyday technology. In the excerpt below, Ennis recounts the harrowing final days of pitching and presentations that ultimately convinced the IEEE 802.11 Wireless LAN standards committee to adopt their candidate protocol, and examines the outside influence that Bob Metcalfe, inventor of Ethernet and founder of 3Com, had on Wi-Fi's eventual emergence.

Post Hill Press

Excerpted from Beyond Everywhere: How Wi-Fi Became the World’s Most Beloved Technology (c) 2023 by Greg Ennis. Published by Post Hill Press. Used with permission.


With our DFWMAC foundation now chosen, the work for the IEEE committee calmed down into a deliberate process for approving the actual text language for the standard. There were still some big gaps that needed to be filled in—most important being an encryption scheme—but the committee settled into a routine of developing draft versions of the MAC sections of the ultimate standard document. At the January 1994 meeting in San Jose, I was selected to be Technical Editor of the entire (MAC+PHY) standard along with Bob O’Hara, and the two of us would continue to serve as editors through the first publication of the final standard in 1997. 

The first draft of the MAC sections was basically our DFWMAC specification reformatted into the IEEE template. The development of the text was a well-established process within IEEE standards committees: as Bob and I would complete a draft, the members of the committee would submit comments, and at the subsequent meeting, there would be debates and decisions on improvements to the text. There were changes made to the packet formats, and detailed algorithmic language was developed for the operations of the protocol, but by and large, the conceptual framework of DFWMAC was left intact. In fact, nearly thirty years after DFWMAC was first proposed, its core ideas continue to form the foundation for Wi-Fi.

 While this text-finalization process was going on, the technology refused to stand still. Advances in both radio communications theory and circuit design meant that higher speeds might be possible beyond the 2-megabit maximum in the draft standard. Many companies within the industry were starting to look at higher speeds even before the original standard was finally formally adopted in 1997. Achieving a speed greater than 10 megabits — comparable to standard Ethernet — had become the wireless LAN industry’s Holy Grail. The challenge was to do this while staying within the FCC’s requirements — something that would require both science and art. 

Faster is always better, of course, but what was driving the push for 10 megabits? What wireless applications were really going to require 10-megabit speeds? The dominant applications for wireless LANs in the 1990s were the so-called “verticals” — for example, Symbol’s installations that involved handheld barcode scanners for inventory management. Such specialized wireless networks were installed by vertically integrated system providers offering a complete service package, including hardware, software, applications, training, and support, hence the “vertical” nomenclature. While 10-megabit speeds would be nice for these vertical applications, it probably wasn’t necessary, and if the cost were to go up, such speeds wouldn’t be justifiable. So instead, it would be the so-called “horizontal” market — wireless connectivity for general purpose computers — that drove this need for speed. In particular, the predominantly Ethernet-based office automation market, with PCs connected to shared printers and file servers, was seen as requiring faster speeds than the IEEE standard’s 2 megabits.

Bob Metcalfe is famous in the computer industry for three things: Ethernet, Metcalfe’s Law, and 3Com. He co-invented Ethernet; that’s simple enough and would be grounds for his fame all by itself. Metcalfe’s Law— which, of course, is not actually a law of physics but nonetheless seems to have real explanatory power— states that the value of a communication technology is proportional to the square of the number of connected devices. This intuitively plausible “law” explains the viral snowball effect that can result from the growing popularity of a network technology. But it would be Metcalfe’s 3Com that enters into our Wi-Fi story at this moment. 

Metcalfe invented Ethernet while working at PARC, the Xerox Palo Alto Research Center. PARC played a key role in developing many of the most important technologies of today, including window-based graphic computer interfaces and laser printing, in addition to Ethernet. But Xerox is famous for “Fumbling the Future,” also the title of a 1999 book documenting how “Xerox invented, then ignored, the first personal computer,” since the innovations developed at PARC generally ended up being commercialized not by Xerox but by Apple and others. Not surprisingly, Metcalfe decided he needed a different company to take his Ethernet invention to the market, and in 1979, he formed 3Com with some partners.

This was the same year I joined Sytek, which had been founded just a couple of months prior. Like 3Com, Sytek focused on LAN products, although based on broadband cable television technology in contrast to 3Com’s Ethernet. But whereas Sytek concentrated on hardware, 3Com decided to also develop their own software supporting new LAN-based office applications for shared PC access to data files and printers. With these software products in combination with their Ethernet technology, 3Com became a dominant player in the booming office automation market during the nineties that followed the introduction of personal computers. Bob Metcalfe was famously skeptical about wireless LANs. In the August 16, 1993, issue of InfoWorld, he wrote up his opinion in a piece entitled “Wireless computing will flop — permanently”:

This isn’t to say there won’t be any wireless computing. Wireless mobile computers will eventually be as common as today’s pipeless mobile bathrooms. Porta-potties are found on planes and boats, on construction sites, at rock concerts, and other places where it is very inconvenient to run pipes. But bathrooms are still predominantly plumbed. For more or less the same reasons, computers will stay wired.

Was his comparison of wireless to porta-potties just sour grapes? After all, this is coming from the inventor of Ethernet, the very archetype of a wired network. In any event, we were fortunate that Metcalfe was no longer involved with 3Com management in 1996 — because 3Com now enters our story as a major catalyst for the development of Wi-Fi. 

3Com’s strategy for wireless LANs was naturally a subject of great interest, as whatever direction they decided to take was going to be a significant factor in the market. As the premier Ethernet company with a customer base that was accustomed to 10-megabit speeds, it was clear that they wouldn’t take any steps unless the wireless speeds increased beyond the 2 megabits of the draft IEEE standard. But might they decide to stay out of wireless completely, like Bob Metcalfe counselled, to focus on their strong market position with wired Ethernet? And if they did decide to join the wireless world, would they develop their own technology to accomplish this? Or would they partner with an existing wireless developer? The task of navigating 3Com through this twisted path would fall to a disarmingly boyish business development whiz named Jeff Abramowitz, who approached me one afternoon quite unexpectedly. 

Jeff tapped me on the shoulder at an IEEE meeting. “Hey, Greg, can I talk with you for a sec?” he whispered, and we both snuck quietly out of the meeting room. “Just wondering if you have any time available to take on a new project.” He didn’t even give me a chance to respond before continuing with a smile: “10 megabits. Wireless Ethernet.” The idea of working with the foremost Ethernet company on a high-speed version of 802.11 obviously enticed me, and I quickly said, “Let’s get together next week.”

He told me that they had already made some progress towards an internally developed implementation, but that in his opinion, it was more promising for them to partner with one of the major active players. 3Com wanted to procure a complete system of  wireless LAN products that they could offer to their customer base, comprising access points and plug-in adapters (“client devices”) for both laptops and desktops. There would need to be a Request for Proposal developed, which would, of course, include both technical and business requirements, and Jeff looked to me to help formulate the technical requirements. The potential partners included Symbol, Lucent, Aironet, InTalk, and Harris Semiconductor, among others, and our first task was to develop this RFP to send out to these companies. 

Symbol should need no introduction, having been my client and having played a major role in the development of the DFWMAC protocol that was selected as the foundation for the 802.11 standard. Lucent may sound like a new player, but in fact, this is simply our NCR Dutch colleagues from Utrecht — including Wim, Cees, Vic, and Bruce — under a new corporate name, NCR having been first bought by AT&T and then spun off into Lucent. Aironet is similarly an old friend under a new name — back at the start of our story, we saw that the very first wireless LAN product approved by the FCC was from a Canadian company called Telesystems, which eventually was merged into Telxon, with Aironet then being the result of a 1994 spinoff focusing on the wireless LAN business. And in another sign of the small-world nature of the wireless LAN industry at this time, my DFWMAC co-author, Phil Belanger, had moved from Xircom to Aironet in early 1996. 

The two companies here who are truly new to our story are InTalk and Harris. InTalk was a small startup founded in 1996 in Cambridge, England (and then subsequently acquired by Nokia), whose engineers were significant contributors to the development of the final text within the 802.11 standard. Harris Corporation was a major defense contractor headquartered in Melbourne, Florida, who leveraged their radio system design experience into an early wireless LAN chip development project. Since they were focused on being a chip supplier rather than an equipment manufacturer, we didn’t expect them to submit their own proposal, but it was likely that other responders would incorporate their chips, so we certainly viewed them as an important player. 

Over the first couple of months in 1997, Jeff and I worked up a Request for Proposal for 3Com to send out, along with a 3Com engineer named David Fisher, and by March we were able to provide the final version to various candidate partners. Given 3Com’s position in the general LAN market, the level of interest was high, and we indeed got a good set of proposals back from the companies we expected, including Symbol, Lucent, InTalk, and Aironet. These companies, along with Harris, quickly became our focus, and we began a process of intense engagement with all of them over the next several months, building relationships in the process that a year later would ultimately lead to the formation of the Wi-Fi Alliance. 

Bob Metcalfe’s wireless skepticism had been soundly rejected by the very company he founded, with 3Com instead adopting the mantle of wireless evangelism. And Wireless Ethernet, soon to be christened Wi-Fi, was destined to outshine its wired LAN ancestor.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-beyond-everywhere-greg-ennis-post-hill-press-143010153.html?src=rss

The White House's 'AI Cyber Challenge' aims to crowdsource national security solutions

Our local and state-level government systems are hacked and held for ransom with disheartening regularity. At the Black Hat USA Conference in Las Vegas on Wednesday, the Biden Administration revealed its plans to better defend the nation’s critical digital infrastructure: It's launching a DARPA-led challenge competition to build AI systems capable of proactively identifying and fixing software vulnerabilities. That’s right, we’re having a hackathon!

The “AI Cyber Challenge” (AIxCC) is a two-year development program open to competitors throughout the US. It’s being hosted by DARPA in collaboration with Anthropic, Google, Microsoft and OpenAI. Those companies are providing both their expertise in the field and access to their AI technologies.

“The challenge is critical in bringing together the cutting-edge in automatic software, security and AI, which will empower our cyber defenses by being able to quickly exploit and fix software vulnerabilities,” Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technology, said during a press call Tuesday.

“This is one of the ways that public and private sectors work together to do big things to change how the future unfolds,” Arati Prabhakar, Director of the White House Office of Science and Technology Policy, added. “That's why the White House asked DARPA to take on the critical topic of AI for cybersecurity.”

White House officials concede that properly securing the nation’s sprawling federal software systems against intrusion is a daunting task. “They don't have the tools capable of security at this scale,” Perri Adams, Program Manager, Information Innovation Office, DARPA, said during the call. “We've seen in recent years, hackers exploiting the state of affairs, posing a serious national security risk.”

Despite those vulnerabilities, “I think we have to keep one step ahead and AI offers a very promising approach for that,” Adams said. There’s nearly $20 million in prize money up for grabs. And to ensure that the competition isn’t dominated by the teams with the deepest pockets, DARPA is making $7 million available to small businesses that want to compete as well.

The research agency will hold an open qualifying event next spring, where the top-scoring teams (up to 20 can potentially qualify) will be invited to the semifinals at DEF CON 2024. That cohort will be whittled down to the top five teams, who will win monetary prizes at the competition and be invited back to DEF CON 2025 for the finals. The top three scoring teams from the 2025 finals will win even more money. You land first place, you get $4 million — but to do so, your AI had better be able to “rapidly defend critical infrastructure codes from attack,” per White House officials. Ideally, the resulting system would scour networks, seeking out and autonomously repairing any software security bugs it finds.

The winning team will also be strongly encouraged to open-source their resulting program. The competition is bringing on The Open Source Security Foundation (OpenSSF), a Linux Foundation project, as an advisor to the challenge. Their job is to help ensure that the code is put to use immediately, “by everyone from volunteer, open-source developers to commercial industry,” Adams said. “If we're successful, I hope to see AIxCC not only produce the next generation of cybersecurity tools in this space, but show how AI can be used to better society by defending its critical underpinnings.”

“The president has been completely clear that we have got to get AI right for the American people,” Prabhakar said. Last fall the Biden White House unveiled its Blueprint for an AI Bill of Rights, which defined the Administration’s core values and goals on the subject. Follow-up efforts included pushing for an AI risk management framework and investing $140 million in establishing seven new national research institutes dedicated to AI and machine learning. In July, the White House also wrangled a number of leading AI companies into agreeing to (non-binding) commitments that they will develop their products responsibly.

This article originally appeared on Engadget at https://www.engadget.com/the-white-houses-ai-cyber-challenge-aims-to-crowdsource-national-security-solutions-170003434.html?src=rss

Why humans can't use natural language processing to speak with the animals

We’ve been wondering what goes on inside the minds of animals since antiquity. Doctor Dolittle’s talent was far from novel when the character debuted in 1920; Greco-Roman literature is lousy with speaking animals, writers in Zhanguo-era China routinely ascribed language to certain animal species, and talking beasts are also prevalent in Indian, Egyptian, Hebrew and Native American storytelling traditions.

Even today, popular Western culture toys with the idea of talking animals, though often through a lens of technology-empowered speech rather than supernatural force. The dolphins from both Seaquest DSV and Johnny Mnemonic communicated with their bipedal contemporaries through advanced translation devices, as did Dug the dog from Up.

We’ve already got machine-learning systems and natural language processors that can translate human speech into any number of existing languages, and adapting that process to convert animal calls into human-interpretable signals doesn’t seem that big of a stretch. However, it turns out we’ve got more work to do before we can converse with nature.

What is language?

“All living things communicate,” an interdisciplinary team of researchers argued in 2018’s On understanding the nature and evolution of social cognition: a need for the study of communication. “Communication involves an action or characteristic of one individual that influences the behavior, behavioral tendency or physiology of at least one other individual in a fashion typically adaptive to both.”

From microbes, fungi and plants on up the evolutionary ladder, science has yet to find an organism that exists in such extreme isolation as to not have a natural means of communicating with the world around it. But we should be clear that “communication” and “language” are two very different things.

“No other natural communication system is like human language,” argues the Linguistic Society of America. Language allows us to express our inner thoughts and convey information, as well as request or even demand it. “Unlike any other animal communication system, it contains an expression for negation — what is not the case … Animal communication systems, in contrast, typically have at most a few dozen distinct calls, and they are used only to communicate immediate issues such as food, danger, threat, or reconciliation.”

That’s not to say that pets don’t understand us. “We know that dogs and cats can respond accurately to a wide range of human words when they have prior experience with those words and relevant outcomes,” Dr. Monique Udell, Director of the Human-Animal Interaction Laboratory at Oregon State University, told Engadget. “In many cases these associations are learned through basic conditioning,” Dr. Udell said — like when we yell “dinner” just before setting out bowls of food.

Whether or not our dogs and cats actually understand what “dinner” means outside of the immediate Pavlovian response remains to be seen. “We know that at least some dogs have been able to learn to respond to over 1,000 human words (labels for objects) with high levels of accuracy,” Dr. Udell said. “Dogs currently hold the record among non-human animal species for being able to match spoken human words to objects or actions reliably,” but it’s “difficult to know for sure to what extent dogs understand the intent behind our words or actions.”

Dr. Udell continued: “This is because when we measure a dog or cat’s understanding of a stimulus, like a word, we typically do so based on their behavior.” You can teach a dog to sit with both English and German commands, but “if a dog responds the same way to the word ‘sit’ in English and in German, it is likely the simplest explanation — with the fewest assumptions — is that they have learned that when they sit in the presence of either word then there is a pleasant consequence.”

Tea Stražičić for Engadget/Silica Magazine

Hush, the computers are speaking

Natural Language Processing (NLP) is the branch of AI that enables computers and algorithmic models to interpret text and speech, including the speaker’s intent, the same way we meatsacks do. It combines computational linguistics, which models the syntax, grammar and structure of a language, and machine-learning models, which “automatically extract, classify, and label elements of text and voice data and then assign a statistical likelihood to each possible meaning of those elements,” according to IBM. NLP underpins the functionality of every digital assistant on the market. Basically any time you’re speaking at a “smart” device, NLP is translating your words into machine-understandable signals and vice versa.

The field of NLP research has undergone a significant evolution in recent years, as its core systems have migrated from older Recurrent and Convolutional Neural Networks towards Google’s Transformer architecture, which greatly increases training efficiency.

Dr. Noah D. Goodman, Associate Professor of Psychology and Computer Science and of Linguistics at Stanford University, told Engadget that, with RNNs, “you'll have to go time-step by time-step or like word by word through the data and then do the same thing backward.” In contrast, with a transformer, “you basically take the whole string of words and push them through the network at the same time.”

“It really matters to make that training more efficient,” Dr. Goodman continued. “Transformers, they're cool … but by far the biggest thing is that they make it possible to train efficiently and therefore train much bigger models on much more data.”
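To make the contrast concrete, here's a minimal, hypothetical sketch in PyTorch (the shapes and models are illustrative only, not drawn from anything the researchers quoted here used): the RNN has to step through the sequence one word at a time, while the Transformer encoder pushes the whole string of words through in a single parallel pass.

```python
# Hypothetical illustration of the RNN-vs-Transformer difference described above.
import torch
import torch.nn as nn

seq_len, batch, d_model = 16, 1, 64
tokens = torch.randn(seq_len, batch, d_model)  # stand-in for a sequence of embedded words

# RNN: processes the data time-step by time-step
rnn = nn.GRU(input_size=d_model, hidden_size=d_model)
hidden = torch.zeros(1, batch, d_model)
for t in range(seq_len):
    _, hidden = rnn(tokens[t:t + 1], hidden)  # one word at a time

# Transformer: self-attention sees the whole sequence at once
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
out = encoder(tokens)  # a single parallel pass over all positions
```

Because that parallel pass removes the word-by-word dependency, training can be spread across hardware far more efficiently, which is what makes today's much larger models practical.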

Talkin’ jive ain’t just for turkeys

While many species’ communication systems have been studied in recent years — most notably cetaceans like whales and dolphins, but also the southern pied babbler, for its song’s potentially syntactic qualities, and vervet monkeys’ communal predator warning system — none has shown the sheer degree of complexity found in the call of the avian family Paridae: the chickadees, tits and titmice.

Dr. Jeffrey Lucas, professor in the Biological Sciences department at Purdue University, told Engadget that the Paridae call “is one of the most complicated vocal systems that we know of. At the end of the day, what the [field’s voluminous number of research] papers are showing is that it's god-awfully complicated, and the problem with the papers is that they grossly under-interpret how complicated [the calls] actually are.”

These parids often live in socially complex, heterospecific flocks, mixed groupings that include multiple songbird and woodpecker species. The complexity of the birds’ social system is correlated with an increased diversity in communications systems, Dr. Lucas said. “Part of the reason why that correlation exists is because, if you have a complex social system that's multi-dimensional, then you have to convey a variety of different kinds of information across different contexts. In the bird world, they have to defend their territory, talk about food, integrate into the social system [and resolve] mating issues.”

The chickadee call consists of at least six distinct notes set in an open-ended vocal structure, which is both monumentally rare in non-human communication systems and the reason for the chickadee call’s complexity. An open-ended vocal system means that “increased recording of chick-a-dee calls will continually reveal calls with distinct note-type compositions,” explained the 2012 study, Linking social complexity and vocal complexity: a parid perspective. “This open-ended nature is one of the main features the chick-a-dee call shares with human language, and one of the main differences between the chick-a-dee call and the finite song repertoires of most songbird species.”

Tea Stražičić for Engadget/Silica Magazine

Dolphins have no need for kings

Training language models isn’t simply a matter of shoving in large amounts of data. When training a model to translate an unknown language into the one you’re speaking, you need to have at least a rudimentary understanding of how the two languages correlate with one another so that the translated text retains the proper intent of the speaker.

“The strongest kind of data that we could have is what's called a parallel corpus,” Dr. Goodman explained, which is basically having a Rosetta Stone for the two tongues. In that case, you’d simply have to map between specific words, symbols and phonemes in each language — figure out what means “river” or “one bushel of wheat” in each and build out from there.

Without that perfect translation artifact, so long as you have large corpuses of data for both languages, “it's still possible to learn a translation between the languages, but it hinges pretty crucially on the idea that the kind of latent conceptual structure [is shared],” Dr. Goodman continued. That assumes both cultures’ definitions of “one bushel of wheat” are generally equivalent.

Goodman points to the word pairs ’man and woman’ and ’king and queen’ in English. “The structure, or geometry, of that relationship we expect [in] English; if we were translating into Hungarian, we would also expect those four concepts to stand in a similar relationship,” Dr. Goodman said. “Then effectively the way we'll learn a translation now is by learning to translate in a way that preserves the structure of that conceptual space as much as possible.”
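As a toy illustration of that preserved geometry, here's a hedged sketch in Python. The vectors are invented for the example (real embeddings would come from a trained model such as word2vec or GloVe); the point is only that adding the man-to-woman offset to king should land near queen when the conceptual structure lines up.

```python
# Toy illustration of the man:woman :: king:queen geometry described above.
# These vectors are made up for the example, not taken from any real model.
import numpy as np

emb = {
    "man":   np.array([0.8, 0.1, 0.3]),
    "woman": np.array([0.8, 0.9, 0.3]),
    "king":  np.array([0.2, 0.1, 0.9]),
    "queen": np.array([0.2, 0.9, 0.9]),
}

# If the structure is preserved, king + (woman - man) should land near queen.
predicted_queen = emb["king"] + (emb["woman"] - emb["man"])

cosine = np.dot(predicted_queen, emb["queen"]) / (
    np.linalg.norm(predicted_queen) * np.linalg.norm(emb["queen"])
)
print(f"cosine(king + (woman - man), queen) = {cosine:.3f}")  # 1.000 here by construction
```

A translation model that preserves this kind of relational structure across two languages can align concepts even without a word-for-word dictionary, which is exactly why the assumption breaks down if the other "culture" (say, dolphins) doesn't share the underlying concepts.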

Having a large corpus of data to work with in this situation also enables unsupervised learning techniques to be used to “extract the latent conceptual space,” Dr. Goodman said, though that method is more resource intensive and less efficient. However, if all you have is a large corpus in only one of the languages, you’re generally out of luck.

“For most human languages we assume the [quartet concepts] are kind of, sort of similar, like, maybe they don't have ‘king and queen’ but they definitely have ‘man and woman,’” Dr. Goodman continued. ”But I think for animal communication, we can't assume that dolphins have a concept of ‘king and queen’ or whether they have ‘men and women.’ I don't know, maybe, maybe not.”

And without even that rudimentary conceptual alignment to work from, discerning the context and intent of an animal’s call — much less deciphering the syntax, grammar and semantics of the underlying communication system — becomes much more difficult. “You're in a much weaker position,” Dr. Goodman said. “If you have the utterances in the world context that they're uttered in, then you might be able to get somewhere.”

Basically, if you can obtain multimodal data that provides context for the recorded animal call — the environmental conditions, time of day or year, the presence of prey or predator species, etc. — you can “ground” the language data into the physical environment. From there you can “assume that English grounds into the physical environment in the same way as this weird new language grounds into the physical environment” and use that as a kind of bridge between the languages.

Unfortunately, the challenge of translating bird calls into English (or any other human language) is going to fall squarely into that last, hardest category. This means we’ll need more data, and a lot of different types of data, as we continue to build our basic understanding of the structures of these calls from the ground up. Some of those efforts are already underway.

The Dolphin Communication Project, for example, employs a combination “mobile video/acoustic system” to capture both the utterances of wild dolphins and their relative position in physical space at that time to give researchers added context to the calls. Biologging tags — animal-borne sensors affixed to hide, hair, or horn that track the locations and conditions of their hosts — continue to shrink in size while growing in both capacity and capability, which should help researchers gather even more data about these communities.

What if birds are just constantly screaming about the heat?

Even if we won’t be able to immediately chat with our furred and feathered neighbors, gaining a better understanding of how they at least talk to each other could prove valuable to conservation efforts. Dr. Lucas points to a recent study he participated in that found environmental changes induced by climate change can radically change how different bird species interact in mixed flocks. “What we showed was that if you look across the disturbance gradients, then everything changes,” Dr. Lucas said. “What they do with space changes, how they interact with other birds changes. Their vocal systems change.”

“The social interactions for birds in winter are extraordinarily important because you know, 10 gram bird — if it doesn't eat in a day, it's dead,” Dr. Lucas continued. “So information about their environment is extraordinarily important. And what those mixed species flocks do is to provide some of that information.”

However, that network quickly breaks down as the habitat degrades, and in order to survive “they have to really go through fairly extreme changes in behavior and social systems and vocal systems … but that impacts fertility rates, and their ability to feed their kids and that sort of thing.”

Better understanding their calls will help us better understand their levels of stress, which can serve both modern conservation efforts and agricultural ends. “The idea is that we can get an idea about the level of stress in [farm animals], then use that as an index of what's happening in the barn and whether we can maybe even mitigate that using vocalizations,” Dr. Lucas said. “AI probably is going to help us do this.”

“Scientific sources indicate that noise in farm animal environments is a detrimental factor to animal health,” Jan Brouček of the Research Institute for Animal Production Nitra, observed in 2014. “Especially longer lasting sounds can affect the health of animals. Noise directly affects reproductive physiology or energy consumption.” That continuous drone is thought to also indirectly impact other behaviors including habitat use, courtship, mating, reproduction and the care of offspring. 

Conversely, 2021’s research, The effect of music on livestock: cattle, poultry and pigs, has shown that playing music helps to calm livestock and reduce stress during times of intensive production. We can measure that reduction in stress based on what sorts of happy sounds those animals make. Like listening to music in another language, we can get with the vibe, even if we can't understand the lyrics.

This article originally appeared on Engadget at https://www.engadget.com/why-humans-cant-use-natural-language-processing-to-speak-with-the-animals-143050169.html?src=rss

Boeing's Starliner could be ready for crewed flights by next March

Boeing has rediscovered just how hard space can be in recent months, as its ambitious Starliner program has been repeatedly sidelined by lingering technical issues. However, the company announced at a press conference Monday that it is confident that it will have those issues ironed out by next March and will be ready to test its reusable crew capsule with live NASA astronauts aboard.

“Based on the current plans, we’re anticipating that we’re going to be ready with the spacecraft in early March. That does not mean we have a launch date in early March,” Boeing VP and Starliner manager Mark Nappi stressed during the event, per CNBC. “We’re now working with NASA – Commercial Crew program and [International Space Station] – and ULA on potential launch dates based on our readiness ... we’ll work throughout the next several weeks and see where we can get fit in and then we’ll set a launch date.”

The Starliner has been in development for nearly fifteen years now, first being unveiled in 2010. It's Boeing's entry into the reusable crew capsule race, which is currently being dominated by SpaceX with its Dragon 2. 

The two companies were actually awarded grants at the same time in 2014 to develop systems capable of transporting astronauts to the ISS with a contract deadline of 2017. By 2016, Boeing's first scheduled launch had already been pushed from 2017 to late 2018. By April 2018, NASA was tempering its launch expectations to between 2019 and 2020.

The first uncrewed orbital test flight in late 2019 failed to reach orbit, which further delayed the project. NASA, however, did agree to pay for a second uncrewed test in August of 2021. That test never made it off the launch pad due to a "valve issue." Fixing that problem took until the following May when the follow-up test flight completed successfully.

The two subsequent preparatory attempts for a crewed flight did not. The scheduled July 21 flight was scrubbed after faults were discovered in both the parachute system and wiring harnesses. Which brings us to March, when Boeing is confident its Starliner will successfully shuttle a pair of NASA astronauts to the ISS for a weeklong stay. To date, Boeing is estimated to have incurred around $1.5 billion in project cost overruns.

This article originally appeared on Engadget at https://www.engadget.com/boeings-starliner-could-be-ready-for-crewed-flights-by-next-march-210222245.html?src=rss

Microsoft's Bing chat is available in Chrome and Safari mobile

Microsoft wasn't subtle in announcing its plans to add AI functionality to any and all of its existing products. On Monday, the company announced that, in addition to its availability on the Edge mobile browser, as well as standalone Android and iOS apps, Microsoft's Bing Chat AI chatbot will now be accessible through third-party browsers like Safari and Chrome.

The news comes as part of Microsoft's six-month commemoration of Bing Chat's public availability. The company also notes that in that time, users have engaged in more than a billion conversations with the AI and have had it generate three-quarters of a billion images. 

"This next step in the journey allows Bing to showcase the incredible value of summarized answers, image creation and more, to a broader array of people," the company release reads. Features like "longer conversations [and] chat history" remain Edge mobile exclusives, however. 

Microsoft began opening access to Bing Chat in late July, when it became available on third-party desktop browsers. That version is limited as well, offering only 2,000 characters per prompt on Chrome and Safari versus 4,000 on Edge.

Bing Chat is powered by OpenAI's GPT-4 but offers more up-to-date information than the model it's built on, thanks to Bing Chat's access to Bing Search, which gives it information on events that have happened since the model was trained. In addition to the third-party browser access, the newest version of Bing Chat will also offer multimodal search, meaning users will be able to upload a photo and have the AI answer specific questions about its contents, as well as a dark mode for after-hours AI queries.

This article originally appeared on Engadget at https://www.engadget.com/microsofts-bing-chat-is-available-in-chrome-and-safari-mobile-191240880.html?src=rss

Hitting the Books: In England's industrial mills, even the clocks worked against you

America didn't get around to really addressing child labor until the late '30s, when Roosevelt's New Deal took hold and the Public Contracts Act raised the minimum age to 16. Before then, kids could often look forward to spending the majority of their days doing some of the most dangerous and delicate work required on the factory floor. It's something today's kids can look forward to as well.

In Hands of Time: A Watchmaker's History, venerated watchmaker Rebecca Struthers explores how the practice and technology of timekeeping have shaped and molded the modern world through her examination of history's most acclaimed timepieces. In the excerpt below, however, we take a look at 18th- and 19th-century Britain, where timekeeping was used as a means of social coercion to keep both adult and child workers pliant and productive.

HarperCollins

Excerpted from Hands of Time: A Watchmaker's History by Rebecca Struthers. Published by Harper. Copyright © 2023 by Rebecca Struthers. All rights reserved.


Although Puritanism had disappeared from the mainstream in Europe by the time of the Industrial Revolution, industrialists, too, preached redemption through hard work — lest the Devil find work for idle hands to do. Now, though, the goal was productivity as much as redemption, although the two were often conveniently conflated. To those used to working by the clock, the provincial workers’ way of time appeared lazy and disorganized and became increasingly associated with unchristian, slovenly ways. Instead ‘time thrift’ was promoted as a virtue, and even as a source of health. In 1757, the Irish statesman Edmund Burke argued that it was ‘excessive rest and relaxation [that] can be fatal producing melancholy, dejection, despair, and often self-murder’ while hard work was ‘necessary to health of body and mind’.

Historian E.P. Thompson, in his famous essay ‘Time, Work-Discipline and Industrial Capitalism’, poetically described the role of the watch in eighteenth-century Britain as ‘the small instrument which now regulated the rhythms of industrial life’. It’s a description that, as a watchmaker, I particularly enjoy, as I’m often ‘regulating’ the watches I work on — adjusting the active hairspring length to get the watch running at the right rate — so they can regulate us in our daily lives. For the managerial classes, however, their watches dictated not just their own lives but also those of their employees.

In 1850 James Myles, a factory worker from Dundee, wrote a detailed account of his life working in a spinning mill. James had lived in the countryside before relocating to Dundee with his mother and siblings after his father was sentenced to seven years’ transportation to the colonies for murder. James was just seven years old when he managed to get a factory job, a great relief to his mother as the family were already starving. He describes stepping into ‘the dust, the din, the work, the hissing and roaring of one person to another’. At a nearby mill the working day ran for seventeen to nineteen hours and mealtimes were almost dispensed with in order to eke the very most out of their workers’ productivity, ‘Women were employed to boil potatoes and carry them in baskets to the different flats; and the children had to swallow a potato hastily … On dinners cooked and eaten as I have described, they had to subsist till half past nine, and frequently ten at night.’ In order to get workers to the factory on time, foremen sent men round to wake them up. Myles describes how ‘balmy sleep had scarcely closed their urchin eyelids, and steeped their infant souls in blessed forgetfulness, when the thumping of the watchmen’s staff on the door would rouse them from repose, and the words “Get up; it’s four o’clock,” reminded them they were factory children, the unprotected victims of monotonous slavery.’

Human alarm clocks, or ‘knocker-uppers’, became a common sight in industrial cities.* If you weren’t in possession of a clock with an alarm (an expensive complication at the time), you could pay your neighborhood knocker-upper a small fee to tap on your bedroom windows with a long stick, or even a pea shooter, at the agreed time. Knocker-uppers tried to concentrate as many clients within a short walking distance as they could, but were also careful not to knock too hard in case they woke up their customer’s neighbors for free. Their services became more in demand as factories increasingly relied on shift work, expecting people to work irregular hours.

Once in the workplace, access to time was often deliberately restricted and could be manipulated by the employer. By removing all visible clocks other than those controlled by the factory, the only person who knew what time the workers had started and how long they’d been going was the factory master. Shaving time off lunch and designated breaks and extending the working day for a few minutes here and there was easily done. As watches started to become more affordable, those who were able to buy them posed an unwelcome challenge to the factory master’s authority.

An account from a mill worker in the mid-nineteenth century describes how: ‘We worked as long as we could see in the summer time, and I could not say what hour it was when we stopped. There was nobody but the master and the master’s son who had a watch, and we did not know the time. There was one man who had a watch … It was taken from him and given into the master’s custody because he had told the men the time of day …’

James Myles tells a similar story: ‘In reality there were no regular hours: masters and managers did with us as they liked. The clocks at factories were often put forward in the morning and back at night, and instead of being instruments for the measurement of time, they were used as cloaks for cheatery and oppression. Though it is known among the hands, all were afraid to speak, and a workman then was afraid to carry a watch, as it was no uncommon event to dismiss anyone who presumed to know too much about the science of Horology.’

Time was a form of social control. Making people start work at the crack of dawn, or even earlier, was seen as an effective way to prevent working-class misbehavior and help them to become productive members of society. As one industrialist explained, ‘The necessity of early rising would reduce the poor to a necessity of going to Bed bedtime; and thereby prevent the Danger of Midnight revels.’ And getting the poor used to temporal control couldn’t start soon enough. Even children’s anarchic sense of the present should be tamed and fitted to schedule. In 1770 English cleric William Temple had advocated that all poor children should be sent from the age of four to workhouses, where they would also receive two hours of schooling a day. He believed that there was:

considerable use in their being, somehow or other, constantly employed for at least twelve hours a day, whether [these four-year-olds] earn their living or not; for by these means, we hope that the rising generation will be so habituated to constant employment that it would at length prove agreeable and entertaining to them ...

Because we all know how entertaining most four-year-olds would find ten hours of hard labor followed by another two of schooling. In 1772, in an essay distributed as a pamphlet entitled A View of Real Grievances, an anonymous author added that this training in the ‘habit of industry’ would ensure that, by the time a child was just six or seven, they would be ‘habituated, not to say naturalized to Labour and Fatigue.’ For those readers with young children looking for further tips, the author offered examples of the work most suited to children of ‘their age and strength’, chief being agriculture or service at sea. Appropriate tasks to occupy them include digging, plowing, hedging, chopping wood and carrying heavy things. What could go wrong with giving a six-year-old an ax or sending them off to join the navy?

The watch industry had its own branch of exploitative child labour in the form of what is known as the Christchurch Fusee Chain Gang. When the Napoleonic Wars caused problems with the supply of fusee chains, most of which came from Switzerland, an entrepreneurial clockmaker from the south coast of England, called Robert Harvey Cox, saw an opportunity. Making fusee chains isn’t complicated, but it is exceedingly fiddly. The chains, similar in design to a bicycle chain, are not much thicker than a horse’s hair, and are made up of links that are each stamped by hand and then riveted together. To make a section of chain the length of a fingertip requires seventy-five or more individual links and rivets; a complete fusee chain can be the length of your hand. One book on watchmaking calls it ‘the worst job in the world’. Cox, however, saw it as perfect labor for the little hands of children and, when the Christchurch and Bournemouth Union Workhouse opened in 1764 down the road from him to provide accommodation for the town’s poor, he knew where to go looking. At its peak, Cox’s factory employed around forty to fifty children, some as young as nine, under the pretext of preventing them from being a financial burden. Their wages, sometimes less than a shilling a week (around £3 today), were paid directly to their workhouse. Days were long and, although they appear to have had some kind of magnification to use, the work could cause headaches and permanent damage to their eyesight. Cox’s factory was followed by others, and Christchurch, this otherwise obscure market town on the south coast, would go on to become Britain’s leading manufacturer of fusee chains right up until the outbreak of the First World War in 1914.

The damage industrial working attitudes to time caused to poor working communities was very real. The combination of long hours of hard labor, in often dangerous and heavily polluted environments, with disease and malnutrition caused by abject poverty, was toxic. Life expectancy in some of the most intensive manufacturing areas of Britain was incredibly low. An 1841 census of the Black Country parish of Dudley in the West Midlands found that the average was just sixteen years and seven months.
