
Generative AI can help bring tomorrow's gaming NPCs to life

Elves and Argonians clipping through walls and stepping through tables, blacksmiths who won’t acknowledge your existence until you take a single step to the left, draugr that drop into rag-doll seizures the moment you put an arrow through their eye — Bethesda’s long-running Elder Scrolls RPG series is beloved for many reasons, but the realism of its non-playable characters (NPCs) is not among them. The days of hearing the same rote quotes and watching the same half-hearted search patterns perpetually repeated by NPCs, however, are quickly coming to an end. It’s all thanks to the emergence of generative chatbots that are helping game developers craft more lifelike, realistic characters and in-game action.

“Game AI is seldom about any deep intelligence but rather about the illusion of intelligence,” Steve Rabin, Principal Software Engineer at Electronic Arts, wrote in the 2017 essay, The Illusion of Intelligence. “Often we are trying to create believable human behavior, but the actual intelligence that we are able to program is fairly constrained and painfully brittle.”

Just as with other forms of media, video games require the player to suspend their disbelief for the illusions to work. That’s not a particularly big ask given the fundamentally interactive nature of gaming. “Players are incredibly forgiving as long as the virtual humans do not make any glaring mistakes,” Rabin continued. “Players simply need the right clues and suggestions for them to share and fully participate in the deception.”

Early days

Take Space Invaders and Pac-Man, for example. In Space Invaders, the descending enemies remained steadfast on their zig-zag path toward Earth’s annihilation regardless of the player’s actions, with the only change being a speed increase as they neared the ground. There was no enemy intelligence to speak of; only the player’s skill in leading targets would carry the day. Pac-Man, on the other hand, made enemy interactions a tentpole of its gameplay.

Under normal circumstances, the Ghost Gang will coordinate to track and trap The Pac — unless the player gobbles up a Power Pellet and vengefully hunts down Blinky, Pinky, Inky and Clyde. That simple, two-state behavior, essentially a fancy if-then statement, proved revolutionary for the nascent gaming industry, and finite-state machines (FSMs) became a de facto method of programming NPC reactions for years to come.

Finite-state machines

A finite-state machine is a mathematical model that abstracts a theoretical “machine” capable of existing in any number of states — ally/enemy, alive/dead, red/green/blue/yellow/black — but occupying exclusively one state at a time. It consists “of a set of states and a set of transitions making it possible to go from one state to another one,” Viktor Lundstrom wrote in 2016’s Human-like decision making for bots in mobile gaming. “A transition connects two states but only one way so that if the FSM is in a state that can transit to another state, it will do so if the transition requirements are met. Those requirements can be internal like how much health a character has, or it can be external like how big of a threat it is facing.”

Like the light switches in Half-Life and Fallout or the electric generators in Dead Island, FSMs are either on, off or in some other rigidly defined state (real-world examples include a traffic light or your kitchen microwave). These machines can transition back and forth between states in response to the player’s actions, but half measures like dimmer switches and low-power modes do not exist in these universes. There are few limits on the number of states an FSM can occupy beyond the logistical challenge of programming and maintaining them all, as you can see in the Ghost Gang’s behavioral flowcharts in Jared Mitchell’s blog post, AI Programming Examples. Lundstrom points out that an FSM “offers lots of flexibility but has the downside of producing a lot of method calls,” which tie up additional system resources.
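
To make the concept concrete, here is a minimal finite-state machine sketch in Python, loosely modeled on the Ghost Gang's chase/frightened toggle described earlier (the state and event names are illustrative, not taken from any actual game's code):

```python
# Minimal finite-state machine sketch: a ghost that is either chasing
# or frightened, and occupies exactly one state at a time.
class GhostFSM:
    def __init__(self):
        self.state = "chase"  # default state
        # transitions: (current_state, event) -> next_state
        self.transitions = {
            ("chase", "power_pellet_eaten"): "frightened",
            ("frightened", "timer_expired"): "chase",
            ("frightened", "eaten_by_player"): "chase",
        }

    def handle(self, event):
        # only move to another state if a matching transition exists
        self.state = self.transitions.get((self.state, event), self.state)

ghost = GhostFSM()
ghost.handle("power_pellet_eaten")
print(ghost.state)  # frightened
ghost.handle("timer_expired")
print(ghost.state)  # chase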

Decision and behavior trees

Alternatively, game AIs can be modeled using decision trees. “There are usually no logical checks such as AND or OR because they are implicitly defined by the tree itself,” Lundstrom wrote, noting that the trees “can be built in a non-binary fashion making each decision have more than two possible outcomes.”

Behavior trees are a logical step above that, chaining multiple smaller decisions together to produce contextual actions. For example, if a character is faced with the task of passing through a closed door, it can either perform the action of turning the handle to open it or, upon finding the door locked, take the “composite action” of pulling a crowbar from its inventory and breaking the locking mechanism.
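
Here's a rough Python sketch of how that locked-door example might be expressed as a behavior tree built from selector and sequence nodes; the node and leaf functions are hypothetical, not drawn from any shipping engine:

```python
# Minimal behavior-tree sketch: a selector tries children in order until
# one succeeds; a sequence succeeds only if every child succeeds.
def selector(*children):
    def tick(state):
        return any(child(state) for child in children)
    return tick

def sequence(*children):
    def tick(state):
        return all(child(state) for child in children)
    return tick

# Leaf behaviors for the locked-door example (illustrative only).
def open_with_handle(state):
    return not state["door_locked"]          # works only if the door is unlocked

def has_crowbar(state):
    return "crowbar" in state["inventory"]

def break_lock(state):
    state["door_locked"] = False
    return True

pass_through_door = selector(
    open_with_handle,                        # try the simple action first
    sequence(has_crowbar, break_lock),       # composite fallback
)

print(pass_through_door({"door_locked": True, "inventory": ["crowbar"]}))  # True
```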

“Behavior trees use what is called a reactive design where the AI tends to try things and makes its decisions from things it has gotten signals from,” Lundstrom explained. “This is good for fast phasing games where situations change quite often. On the other hand, this is bad in more strategic games where many moves should be planned into the future without real feedback.”

GOAP and Radiant AI

From behavior trees grew GOAP (Goal-Oriented Action Planning), which we first saw in 2005’s F.E.A.R. An AI agent empowered with GOAP chooses from any number of goals, prioritized based on environmental factors, and then uses the actions available to it to work toward the chosen goal. “This prioritization can in real-time be changed if as an example the goal of being healthy increases in priority when the health goes down,” Lundstrom wrote. He asserts that the approach is “a step in the right direction” but suffers the drawback that “it is harder to understand conceptually and implement, especially when bot behaviors come from emergent properties.”
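
Here's a heavily simplified sketch of that idea in Python, with made-up goals, actions and world-state values; real GOAP implementations like F.E.A.R.'s run a planner (typically an A*-style search over action preconditions and effects) rather than a single lookup:

```python
# Goal-oriented action planning, boiled down: goals are re-prioritized from
# the current world state, and the agent picks an available action that
# works toward the most urgent goal. (Illustrative sketch only.)
GOALS = {
    "stay_healthy": lambda world: 1.0 - world["health"],   # rises as health drops
    "eliminate_threat": lambda world: world["threat"],
}

ACTIONS = {
    "use_medkit": {"satisfies": "stay_healthy", "precondition": lambda w: w["has_medkit"]},
    "take_cover": {"satisfies": "stay_healthy", "precondition": lambda w: True},
    "attack": {"satisfies": "eliminate_threat", "precondition": lambda w: w["has_ammo"]},
}

def choose_action(world):
    # pick the most urgent goal right now...
    goal = max(GOALS, key=lambda g: GOALS[g](world))
    # ...then any available action that works toward it
    for name, action in ACTIONS.items():
        if action["satisfies"] == goal and action["precondition"](world):
            return name
    return "idle"

print(choose_action({"health": 0.2, "threat": 0.5, "has_medkit": True, "has_ammo": True}))
# -> "use_medkit", because the low-health goal now outranks the threat
```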

Radiant AI, which Bethesda developed first for Elder Scrolls IV: Oblivion and then adapted for Skyrim, Fallout 3, Fallout 4 and Fallout: New Vegas, operates on a similar principle to GOAP. Whereas NPCs in Oblivion were programmed with only five or six set actions, resulting in highly predictable behaviors, by Skyrim those behaviors had expanded into location-specific sets, so that NPCs working in mines and lumber yards wouldn’t mirror the movements of folks in town. What’s more, the character’s moral and social standing with an NPC’s faction in Skyrim began to influence the AI’s reactions to the player’s actions. “Your friend would let you eat the apple in his house,” Bethesda Game Studios creative director Todd Howard told Game Informer in 2011, rather than reporting you to the town guard as they would if the relationship were strained.

Modern AIs

Naughty Dog’s The Last of Us series offers some of today’s most advanced NPC behaviors for enemies and allies alike. “Characters give the illusion of intelligence when they are placed in well thought-out setups, are responsive to the player, play convincing animations and sounds, and behave in interesting ways,” Mark Botta, Senior Software Engineer at Ripple Effect Studios, wrote in Infected AI in The Last of Us. “Yet all of this is easily undermined when they mindlessly run into walls or do any of the endless variety of things that plague AI characters.”

“Not only does eliminating these glitches provide a more polished experience,” he continued, “but it is amazing how much intelligence is attributed to characters that simply don’t do stupid things.”

You can see this in the actions of enemies, whether they’re human Hunters or infected Clickers, as well as in allies like Joel’s ward, Ellie. The game’s two primary flavors of enemy combatant are built on the same base AI system but “feel fundamentally different” from one another thanks to a “modular AI architecture that allows us to easily add, remove, or change decision-making logic,” Botta wrote.

The key to this architecture was never referring to the enemy character types in the code but rather, “[specifying] sets of characteristics that define each type of character,” Botta said. “For example, the code refers to the vision type of the character instead of testing if the character is a Runner or a Clicker … Rather than spreading the character definitions as conditional checks throughout the code, it centralizes them in tunable data.” Doing so empowers the designers to adjust character variations directly instead of having to ask for help from the AI team.
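
As an illustration of that data-driven approach (a sketch only, not Naughty Dog's actual code or field names), character types can be reduced to tunable characteristics that the rest of the code queries:

```python
# Illustrative only: character types defined as tunable data rather than as
# conditional type checks scattered through the code. Field names are hypothetical.
CHARACTER_TUNING = {
    "runner":  {"vision": "normal", "hearing_radius": 10.0, "move_speed": 4.5},
    "clicker": {"vision": "blind",  "hearing_radius": 25.0, "move_speed": 2.0},
    "hunter":  {"vision": "normal", "hearing_radius": 15.0, "move_speed": 3.5},
}

def can_see(character, target_visible):
    # the code asks about the character's vision type, never "is this a Clicker?"
    return target_visible and CHARACTER_TUNING[character]["vision"] != "blind"

print(can_see("clicker", target_visible=True))  # False: clickers rely on hearing
print(can_see("hunter", target_visible=True))   # True
```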

The AI system is divided into the high-level logic (aka “skills”) that dictates a character’s strategy and the low-level “behaviors” the character uses to achieve its goal. Botta points to a character’s “move-to behavior” as one such example. So when Joel and Ellie come across a crowd of enemy characters, whether they approach by stealth or by force is determined by the character’s skills.

“Skills decide what to do based on the motivations and capabilities of the character, as well as the current state of the environment,” he wrote. “They answer questions like ‘Do I want to attack, hide, or flee?’ and ‘What is the best place for me to be?’” And once the character makes that decision, the lower-level behaviors trigger to perform the action. This could be Joel automatically ducking into cover and drawing a weapon or Ellie scampering off to a separate nearby hiding spot, avoiding obstacles and enemy sight lines along the way (at least for the Hunters — Clickers can hear you breathing).
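
Here's a toy Python sketch of that skill/behavior split, with invented names and thresholds rather than anything taken from The Last of Us itself:

```python
# Sketch of the skill/behavior split: a high-level skill answers "what should
# I do?", then hands off to a low-level behavior that answers "how do I do it?"
def combat_skill(world):
    # strategy: decide whether to take cover, attack or keep searching
    if world["health"] < 0.3:
        return ("move_to", world["nearest_cover"])
    if world["enemy_visible"]:
        return ("attack", world["enemy_position"])
    return ("move_to", world["search_point"])

def move_to_behavior(destination):
    # low-level execution: path-find and step toward the destination
    return f"pathing to {destination}"

def attack_behavior(target):
    return f"firing at {target}"

BEHAVIORS = {"move_to": move_to_behavior, "attack": attack_behavior}

decision, argument = combat_skill(
    {"health": 0.2, "enemy_visible": True, "enemy_position": (7, 2),
     "nearest_cover": (3, 4), "search_point": (9, 9)}
)
print(BEHAVIORS[decision](argument))  # pathing to (3, 4): low health, so take cover
```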

Tomorrow’s AIs

Generative AI systems have made headlines recently due in large part to the runaway success of next-generation chatbots from Google, Meta, OpenAI and others, but they’ve been a mainstay of game design for years. Dwarf Fortress and Deep Rock Galactic just wouldn’t be the same without their procedurally generated levels and environments — but what if we could apply those generative principles to dialogue creation too? That’s what Ubisoft is attempting with its new Ghostwriter AI.

“Crowd chatter and barks are central features of player immersion in games – NPCs speaking to each other, enemy dialogue during combat, or an exchange triggered when entering an area all provide a more realistic world experience and make the player feel like the game around them exists outside of their actions,” Ubisoft’s Roxane Barth wrote in a March blog post. “However, both require time and creative effort from scriptwriters that could be spent on other core plot items. Ghostwriter frees up that time, but still allows the scriptwriters a degree of creative control.”

The process isn’t all that different from messing around with public chatbots like Bing Chat and Bard, albeit with a few important distinctions. The scriptwriter first comes up with a character and a general idea of what that person would say. That gets fed into Ghostwriter, which returns a rough list of potential barks. The scriptwriter can then choose a bark and edit it to meet their specific needs. The system generates these barks in pairs, and selecting one over the other serves as a quick training and refinement method: the system learns from the preferred choice and, after a few thousand repetitions, begins generating more accurate and desirable barks from the outset.
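
Ubisoft hasn't published Ghostwriter's internals, but the workflow described above maps onto a familiar preference-learning pattern: generate candidate barks in pairs, record which one the writer keeps, and use those preference pairs as training data for the next pass. A minimal sketch, with a random template generator standing in for the actual model:

```python
# Sketch of the described workflow: generate barks in pairs, let the writer
# pick (and edit) one, and log the preference pair for later fine-tuning.
# generate_bark() is a stand-in; Ghostwriter's actual model is not public.
import random

def generate_bark(prompt):
    templates = ["Fresh {item}, get it while it lasts!",
                 "You won't find better {item} anywhere!"]
    return random.choice(templates).format(item=prompt)

preference_log = []  # (prompt, chosen, rejected) triples become training data

def writer_review(prompt):
    a, b = generate_bark(prompt), generate_bark(prompt)
    chosen, rejected = a, b  # in the real tool, the scriptwriter makes this call
    preference_log.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return chosen

print(writer_review("fish"))
print(len(preference_log), "preference pair(s) collected for the next training pass")
```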

“Ghostwriter was specifically created with games writers, for the purpose of accelerating their creative iteration workflow when writing barks [short phrases]” Yves Jacquier, Executive Director at Ubisoft La Forge, told Engadget via email. “Unlike other existing chatbots, prompts are meant to generate short dialogue lines, not to create general answers.”

“From here, there are two important differences,” Jacquier continued. “One is on the technical aspect: for using Ghostwriter writers have the ability to control and give input on dialogue generation. Second, and it’s a key advantage of having developed our in-house technology: we control on the costs, copyrights and confidentiality of our data, which we can re-use to further train our own model.”

Ghostwriter’s assistance doesn’t just make scriptwriters’ jobs easier, it in turn helps improve the overall quality of the game. “Creating believable large open worlds is daunting,” Jacquier said. “As a player, you want to explore this world and feel that each character and each situation is unique, and involve a vast variety of characters in different moods and with different backgrounds. As such there is a need to create many variations to any mundane situation, such as one character buying fish from another in a market.”

Writing 20 different iterations of ways to shout “fish for sale” is not the most effective use of a writer’s time. “They might come up with a handful of examples before the task might become tedious,” Jacquier said. “This is exactly where Ghostwriter kicks in: proposing such dialogs and their variations to a writer, which gives the writer more variations to work with and more time to polish the most important narrative elements.”

Ghostwriter is one of a growing number of generative AI systems Ubisoft has begun to use, including voice synthesis and text-to-speech. “Generative AI has quickly found its use among artists and creators for ideation or concept art,” Jacquier said, but clarified that humans will remain in charge of the development process for the foreseeable future, regardless of coming AI advancements. “Games are a balance of technological innovation and creativity and what makes great games is our talent – the rest are tools. While the future may involve more technology, it doesn’t take away the human in the loop.”

7.4887 billion reasons to get excited

Per a recent Market.us report, the value of generative AI in the gaming market could as much as septuple by 2032, growing from around $1.1 billion in 2023 to nearly $7.5 billion over the next decade. Those gains will be driven by improvements to NPC behaviors, productivity boosts from automated digital asset generation and procedurally generated content creation.

And it won’t just be major studios cranking out AAA titles that benefit from the generative AI revolution. Just as hundreds of mobile apps built atop ChatGPT are already mushrooming up on Google Play and the App Store for myriad purposes, these foundational models (not necessarily Ghostwriter itself but its inevitable open-source derivatives) are poised to spawn countless tools that will in turn empower indie game devs, modders and individual players alike. And given how quickly natural language is supplanting the need to program in proper code, our holodeck-style immersive gaming days could be closer than we ever dared hope.



Screenshots of Instagram's answer to Twitter leak online

After Elon Musk finalized his purchase of Twitter last October, Mark Zuckerberg's Meta reportedly began working on a social media platform of its own, codenamed Project 92. During a company-wide meeting on Thursday, Meta chief product officer Chris Cox showed off a set of UI mock-ups to the assembled employees, which promptly leaked online.

The project's existence was first officially confirmed in March when the company told reporters, "We're exploring a standalone decentralized social network for sharing text updates. We believe there's an opportunity for a separate space where creators and public figures can share timely updates about their interests." A set of design images shared internally in May were leaked online as well.

The new platform, which Cox referred to as “our response to Twitter,” will be a standalone program, based on Instagram and integrating ActivityPub, the same networking protocol that powers Mastodon. The leaked images include a shot of the secure sign in screen; the main feed, which looks suspiciously like Twitter's existing mobile app; and the reply screen. There's no word yet on when the app will be available for public release.

“We’ve been hearing from creators and public figures who are interested in having a platform that is sanely run, that they believe that they can trust and rely upon for distribution,” Cox said, per a Verge report. Celebrities including Oprah and the Dalai Lama have reportedly been attached to the project.


'John Carpenter's Toxic Commando' brings a co-op apocalypse to PS5, PC and Xbox

From Halloween to The Thing, Christine to They Live, John Carpenter is a modern master of cinematic horror. During Summer Game Fest on Thursday, Focus Entertainment and Saber Interactive announced that his unique zompocalyptic vision will be coming to Xbox Series X|S, the Epic Games Store and PlayStation 5 in 2024 with the release of John Carpenter's Toxic Commando.

The game's premise, based on the trailer that debuted on Thursday, is straightforward: you see a zombie, you shoot it until it stops twitching. The plot is equally nuanced, wherein an experiment seeking to draw energy from the Earth's core has instead unleashed an ambulatory zombie plague known as the Sludge God. Players will have to kill it and its unending army of undead monstrosities with an array of melee, edged and ranged weapons, and special abilities. Get ready to rock out with your Glock out because the enemies will be coming at you in hordes. 

Brace yourselves for an explosive, co-op shooter inspired by 80s Horror and Action in John Carpenter's #ToxicCommando!

Drive wicked vehicles, unleash mayhem on hordes of monsters to save the world. Time to go commando!

Coming to Epic Games Store in 2024. #SGF

— Epic Games Store (@EpicGames) June 8, 2023

A firm release date has not yet been set; however, the studio did announce that a closed beta will be offered ahead of release. If you want to get in on the undead butchery ahead of time, you can sign up for the PC beta on the beta website.



How to build a box fan air filter to escape the Canada wildfire smoke

The East Coast is receiving an unwelcome taste (and scent) of the climate crisis to come as smoke from massive Canadian wildfires billows out toward the Atlantic Ocean. Eerily reminiscent of what the West Coast endured in 2020, the skies above New York City this week have turned a hazy orange, sending AQI scores soaring across the five boroughs. On Wednesday, New York ranked as having the second-worst air quality on Earth, behind only Delhi.

That haze is a health hazard, especially for anyone dealing with respiratory disease, asthma, high blood pressure or diabetes, as well as for the elderly and infants. It’s not so great even if your lungs work just fine. Luckily, and I mean that in the most relative sense of the word, we’re coming off the peak of a global pandemic spread through aerosolized exhalations, so New York is already well versed in the practice of masking while in public. That’s good; you’re going to need those skills – and any N95s you've still got tucked away – if you set foot outside for the next few days. Goggles too, if you have them; fine particulate matter is murder on sensitive eyes.

Unless you reside in a hermetically sealed bro-sized terrarium, the hazy air from outside will eventually make it inside, where the particulate matter can concentrate further. And unless you feel like wearing your N95 non-stop until the firestorm has passed, you’re going to need a way to filter the air in your apartment. 

Sure, you could blow a couple hundred bucks on whatever fancy-pants model Wirecutter is recommending, or you could get together some duct tape, a box fan and some good old American Ingenuity™ to build one of your own.

You’ll need three things for this project:

  • One box fan: Doesn’t matter how big, doesn’t matter how old, doesn’t matter how cheap, just make sure that the side lengths of the fan equal the lengths of the filters, so if you have a 20-inch box fan, get 20-inch filters as well. That way everything fits together evenly and you won’t have weird gaps between the panels.

  • Four AC air filters rated either MERV 13-16 or MPR 1200-2800: These are standardized measures of filter efficiency and indicate that the products can effectively strain 2.5-micron smoke particles from the ambient air. They’ll even catch bacteria and viruses if you spring for the higher-grade materials.

  • Tape: The duct variety is always a winner; blue painter's tape will also do well.

To construct it, place each filter on its end at a right angle to its neighbor so that all four form a square with the arrow indicators on each filter facing inward. Tape all of them together in this shape, making sure to not cover the actual filter bits with tape. Place the fan on top so that it blows air down into the square you just made and secure it with tape. Plug it in and you’re good to go. Fun fact: This also works wonders for covering the smell of intentionally-generated smoke in dorm rooms, not that I would have experience in such shenanigans.



Where was all the AI at WWDC?

With its seal broken by the release of ChatGPT last November, generative AI has erupted into mainstream society with a ferocity not seen since Pandora’s famous misadventure with the misery box. The technology is suddenly everywhere, with startups and industry leaders alike scrambling to tack this smart feature du jour onto their existing code stacks and shoehorn the transformational promise of machine-generated content into their every app. At this point in the hype cycle you’d be a fool not to shout your genAI accomplishments from the rooftops; it’s quickly become the only way to be heard above the din from all the customizable chatbot and self-producing PowerPoint slide sellers flooding the market.

If Google’s latest I/O conference or Meta’s new dedicated development team are any indication, the tech industry’s biggest players are also getting ready to go all in on genAI. Google’s event focused on the company’s AI ambitions surrounding Bard and PaLM 2, perhaps even to the detriment of the announced hardware, including the Pixel Fold, the Pixel 7a and the Pixel Tablet. From Gmail’s Smart Compose features to Camera’s Real Tone and Magic Editor, Project Tailwind to the 7a’s generative wallpapers, AI was first on the lips of every Alphabet executive to take the Shoreline stage.

If you’d been drinking two fingers every time Apple mentioned it during its WWDC 2023 keynote, however, you’d be stone sober.

Zero — that’s the number of times that an on-stage presenter uttered the phrase “artificial intelligence” at WWDC 2023. The nearest we got to AI was 'Air' and the term “machine learning” was said exactly seven times. I know because I had a chatbot count for me.

That’s not to say Apple isn’t investing heavily in AI research and development. The products on display during Tuesday’s keynote were chock full of the tech. The “ducking autocorrect” features are empowered by on-device machine learning, as are the Lock Screen’s live video wallpapers (which use it to synthesize interstitial frames) and the new Journal app’s inspirational personalized writing prompts. The PDF autofill features rely on machine vision systems to understand which fields go where — the Health app’s new myopia test does too, just with your kid’s screen distance — while AirPods now tailor your playback settings based on your preferences and prevailing environmental conditions. All thanks to machine learning systems.

It's just, Apple didn’t talk about it. At least, not directly.

Even when discussing the cutting-edge features in the new Vision Pro headset — whether it’s the natural language processing that goes into its voice inputs, the audio ray tracing or the machine-vision black magic that real-time hand-gesture tracking and Optic ID entail — the discussion remained centered on what the headset’s features can do for users, not what the headset could do for the state of the art or the race for market superiority.

The closest Apple got during the event to openly describing the digital nuts and bolts that constitute its machine learning systems was its description of the Vision Pro’s Persona feature. With the device’s applications skewing hard toward gaming, entertainment and communication, there was never a chance that we’d get through this without having to make FaceTime calls with these strapped to our heads. Since a FaceTime call where everybody is hidden behind a headset would defeat the purpose of having a video call, Apple is instead leveraging a complex machine learning system to digitally recreate the Vision Pro wearer’s head, torso, arms and hands — otherwise known as their “Persona.”

“After a quick enrollment process using the front sensors on Vision Pro, the system uses an advanced encoder-decoder neural network to create your digital Persona,” Mike Rockwell, VP of Apple’s Technology Development Group, said during the event. “This network was trained on a diverse group of thousands of individuals. It delivers a natural representation, which dynamically matches your facial and hand movement.”

AI was largely treated as an afterthought throughout the event rather than a selling point, much to Apple’s benefit. In breaking from the carnival-like atmosphere currently surrounding generative AI development, Apple not only maintains its aloof and elite branding, it distances itself from Google’s aggressive promotion of the technology and eases skittish would-be buyers into the joys of face-mounted hardware.

Steve Jobs often used the phrase “it just works” to describe the company’s products — implying that they were meant to solve problems, not create additional hassle for users — and it would appear that Apple has rekindled that design philosophy at the dawn of the spatial computing era. In our increasingly dysfunctional, volatile and erratic society, the promise of simplicity and reliability, of something, anything, working as advertised, could be just what Apple needs to get buyers to swallow the Vision Pro’s $3,499 asking price.


Apple details visionOS, the software that powers the Vision Pro headset

Apple's Vision Pro mixed reality headset will run on visionOS, company executives announced following the bombshell reveal of its long-rumored wearable at WWDC 2023. The operating system, internally codenamed "Oak," has reportedly been in development since 2017. Its existence was further revealed via source code references last February.

Vision Pro's applications will skew hard towards gaming, media consumption and communication, and will offer Apple apps like Messages, FaceTime and Apple Arcade. Apple is already working with a number of media companies to bring their products and content into the new Vision Pro ecosystem. That includes Disney, which, as part of its 100th anniversary celebration, announced Monday that it will bring immersive features to Disney+ content "by combining extraordinary creativity with groundbreaking technology," as Disney CEO Bob Iger put it. "Disney+ will be available 'day one' [of the headset's availability]," Iger added. It appears that ESPN content won't be far behind, based on the few glimpses we saw during the demo.

Apple's announcement comes just days after rival Meta unveiled its own mixed reality headset, the Quest 3.

This is a developing story. Please check back for updates.


Hitting the Books: Why we like bigger things better

We Americans love to have ourselves a big old time. It's not just our waistlines that have exploded outward since the post-WWII era. Our houses have grown larger, as have the appliances within them, the vehicles in their driveways, the income inequalities between ourselves and our neighbors, and the challenges we face on a rapidly warming planet. In his new book, Size: How It Explains the World, Dr. Vaclav Smil, Distinguished Professor Emeritus at the University of Manitoba, takes readers on a multidisciplinary tour of the social quirks, economic intricacies, and biological peculiarities that result from our function following our form.

William Morrow

From SIZE by Vaclav Smil. Copyright 2023 by Vaclav Smil. Reprinted courtesy of William Morrow, an imprint of HarperCollins Publishers.


Modernity’s Infatuation With Larger Sizes

A single human lifetime will have witnessed many obvious examples of this trend in sizes. Motor vehicles are the planet’s most numerous heavy mobile objects. The world now has nearly 1.5 billion of them, and they have been getting larger: today’s bestselling pickup trucks and SUVs are easily twice or even three times heavier than Volkswagen’s Käfer, Fiat’s Topolino, or Citroën’s deux chevaux — family cars whose sales dominated the European market in the early 1950s.

Sizes of homes, refrigerators, and TVs have followed the same trend, not only because of technical advances but because the post–Second World War sizes of national GDPs, so beloved by the growth-enamored economists, have grown by historically unprecedented rates, making these items more affordable. Even when expressed in constant (inflation-adjusted) monies, US GDP has increased 10-fold since 1945; and, despite the postwar baby boom, the per capita rate has quadrupled. This affluence-driven growth can be illustrated by many other examples, ranging from the heights of the highest skyscrapers to the capacity of the largest airplanes or the multistoried cruise ships, and from the size of universities to the size of sports stadiums. Is this all just an expected, inevitable replication of the general evolutionary trend toward larger size?

We know that life began small (at the microbial level as archaea and bacteria that emerged nearly 4 billion years ago), and that, eventually, evolution took a decisive turn toward larger sizes with the diversification of animals during the Cambrian period, which began more than half a billion years ago. Large size (increased body mass) offers such obvious competitive advantages as increased defense against predators (compare a meerkat with a wildebeest) and access to a wider range of digestible biomass, outweighing the equally obvious disadvantages of lower numbers of offspring, longer gestation periods (longer time to reach maturity), and higher food and water needs. Large animals also live (some exceptions aside — some parrots make it past 50 years!) longer than smaller ones (compare a mouse with a cat, a dog with a chimpanzee). But at its extreme the relationship is not closely mass-bound: elephants and blue whales do not top the list; Greenland sharks (more than 250 years), bowhead whales (200 years), and Galapagos tortoises (more than 100 years) do.

The evolution of life is, indeed, the story of increasing size — from solely single-celled microbes to large reptiles and modern African megafauna (elephants, rhinos, giraffes). The maximum body length of organisms now spans the range of eight orders of magnitude, from 200 nanometers (Mycoplasma genitalium) to 31 meters (the blue whale, Balaenoptera musculus), and the extremes of biovolume for these two species range from 8 × 10^-12 cubic millimeters to 1.9 × 10^11 cubic millimeters, a difference of about 22 orders of magnitude.

The evolutionary increase in size is obvious when comparing the oldest unicellular organisms, archaea and bacteria, with later, larger, protozoans and metazoans. But average biovolumes of most extinct and living multicellular animals have not followed a similar path toward larger body sizes. The average sizes of mollusks and echinoderms (starfish, urchins, sea cucumbers) do not show any clear evolutionary trend, but marine fish and mammals have grown in size. The size of dinosaurs increased, but then diminished as the animals approached extinction. The average sizes of arthropods have shown no clear growth trend for half a billion years, but the average size of mammals has increased by about three orders of magnitude during the past 150 million years.

Analyses of living mammalian species show that subsequent generations tend to be larger than their parents, but a single growth step is inevitably fairly limited. In any case, the emergence of some very large organisms has done nothing to diminish the ubiquity and importance of microbes: the biosphere is a highly symbiotic system based on the abundance and variety of microbial biomass, and it could not operate and endure without its foundation of microorganisms. In view of this fundamental biospheric reality (big relying on small), is the anthropogenic tendency toward objects and design of larger sizes an aberration? Is it just a temporary departure from a long-term stagnation of growth that existed in premodern times as far as both economies and technical capabilities were concerned, or perhaps only a mistaken impression created by the disproportionate attention we pay nowadays to the pursuit and possession of large-size objects, from TV screens to skyscrapers?

The genesis of this trend is unmistakable: size enlargements have been made possible by the unprecedented deployment of energies, and by the truly gargantuan mobilization of materials. For millennia, our constraints — energies limited to human and animal muscles; wood, clay, stone, and a few metals as the only choices for tools and construction — circumscribed our quest for larger-designed sizes: they determined what we could build, how we could travel, how much food we could harvest and store, and the size of individual and collective riches we could amass. All of that changed, rather rapidly and concurrently, during the second half of the 19th century.

At the century’s beginning, the world had very low population growth. It was still energized by biomass and muscles, supplemented by flowing water turning small wheels and wind-powering mills as well as relatively small ships. The world of 1800 was closer to the world of 1500 than it was to the mundane realities of 1900. By 1900, half of the world’s fuel production came from coal and oil, electricity generation was rapidly expanding, and new prime movers—steam engines, internal combustion engines, steam and water turbines, and electric motors—were creating new industries and transportation capabilities. And this new energy abundance was also deployed to raise crop yields (through fertilizers and the mechanization of field tasks), to produce old materials more affordably, and to introduce new metals and synthetics that made it possible to make lighter or more durable objects and structures.

This great transformation only intensified during the 20th century, when it had to meet the demands of a rapidly increasing population. Despite the two world wars and the Great Depression, the world’s population had never grown as rapidly as it did between 1900 and 1970. Larger sizes of everything, from settlements to consumer products, were needed both to meet the growing demand for housing, food, and manufactured products and to keep the costs affordable. This quest for larger size—larger coal mines or hydro stations able to supply distant megacities with inexpensive electricity; highly automated factories producing for billions of consumers; container vessels powered by the world’s largest diesel engines and carrying thousands of steel boxes between continents—has almost invariably coincided with lower unit costs, making refrigerators, cars, and mobile phones widely affordable. But it has required higher capital costs and often unprecedented design, construction, and management efforts.

Too many notable size records have been repeatedly broken since the beginning of the 20th century, and the following handful of increases (all quantified by 1900–2020 multiples, calculated from the best available information) indicate the extent of these gains. Capacity of the largest hydroelectricity-generating station is now more than 600 times larger than it was in 1900. The volume of blast furnaces — the structures needed to produce cast iron, modern civilization’s most important metal — has grown 10 times, to 5,000 cubic meters. The height of skyscrapers using steel skeletons has grown almost exactly nine times, to the Burj Khalifa’s 828 meters. Population of the largest city has seen an 11-fold increase, to Greater Tokyo’s 37 million people. The size of the world’s largest economy (using the total in constant monies): still that of the US, now nearly 32 times larger.

But nothing has seen a size rise comparable to the amount of information we have amassed since 1900. In 1897, when the Library of Congress moved to its new headquarters in the Thomas Jefferson Building, it was the world’s largest depository of information and held about 840,000 volumes, the equivalent of perhaps no more than 1 terabyte if stored electronically. By 2009 the library had about 32 million books and printed items, but those represented only about a quarter of all physical collections, which include manuscripts, prints, photographs, maps, globes, moving images, sound recordings, and sheet music, and many assumptions must be made to translate these holdings into electronic storage equivalents: in 1997 Michael Lesk estimated the total size of the Library’s holdings at “perhaps about 3 petabytes,” and hence at least a 3,000-fold increase in a century.

Moreover, for many new products and designs it is impossible to calculate the 20th-century increases because they only became commercialized after 1900, and subsequently grew one, two, or even three orders of magnitude. The most consequential examples in this category include passenger air-travel (Dutch KLM, the first commercial airline, was established in 1919); the preparation of a wide variety of plastics (with most of today’s dominant compounds introduced during the 1930s); and, of course, advances in electronics that made modern computing, telecommunications, and process controls possible (the first vacuum-tube computers used during the Second World War; the first microprocessors in 1971). While these advances have been creating very large numbers of new, small companies, increasing shares of global economic activity have been coming from ever-larger enterprises. This trend toward larger operating sizes has affected not only traditional industrial production (be it of machinery, chemicals, or foods) and new ways of automated product assembly (microchips or mobile phones), but also transportation and a wide range of services, from banks to consulting companies.

This corporate aggrandization is measurable from the number and the value of mergers, acquisitions, alliances, and takeovers. There was a rise from fewer than 3,000 mergers — worth in total about $350 billion — in 1985 to a peak of more than 47,000 mergers worth nearly $5 trillion in 2007, and each of the four pre-COVID years had transactions worth more than $3 trillion. Car production remains fairly diversified, with the top five (in 2021 by revenue: Volkswagen, Toyota, Daimler, Ford, General Motors) accounting for just over a third of the global market share, compared to about 80 percent for the top five mobile phone makers (Apple, Samsung, Xiaomi, Huawei, Oppo) and more than 90 percent for the Boeing–Airbus commercial jetliner duopoly.

But another size-enlarging trend has been much in evidence: increases in size that have nothing to do with satisfying the needs of growing populations, but instead serve as markers of status and conspicuous consumption. Sizes of American houses and vehicles provide two obvious, and accurately documented, examples of this trend, and while imitating the growth of housing has been difficult in many countries (including Japan and Belgium) for spatial and historical reasons, the rise of improbably sized vehicles has been a global trend.

A Ford Model T — the first mass-produced car, introduced in 1908 and made until 1927 — is the obvious baseline for size comparisons. The 1908 Model T was a weakly powered (15 kilowatts), small (3.4 meters), and light (540 kilograms) vehicle, but some Americans born in the mid-1920s lived long enough to see the arrival of improbably sized and misleadingly named sports utility vehicles that have become global favorites. The Chevrolet Suburban (265 kilowatts, 2,500 kilograms, 5.7 meters) wins on length, but Rolls Royce offers a 441-kilowatt Cullinan and the Lexus LX 570 weighs 2,670 kilograms.

These size gains boosted the vehicle-to-passenger weight ratio (assuming a 70-kilogram adult driver) from 7.7 for the Model T to just over 38 for the Lexus LX and to nearly as much for the Yukon GMC. For comparison, the ratio is about 18 for my Honda Civic — and, looking at a few transportation alternatives, it is just over 6 for a Boeing 787, no more than 5 for a modern intercity bus, and a mere 0.1 for a light 7-kilogram bicycle. Remarkably, this increase in vehicle size took place during the decades of heightened concern about the environmental impact of driving (a typical SUV emits about 25 percent more greenhouse gases than the average sedan).

This American preference for larger vehicles soon became another global norm, with SUVs gaining in size and expanding their market share in Europe and Asia. There is no rational defence of these extravaganzas: larger vehicles were not necessitated either by concerns for safety (scores of small- and mid-size cars get top marks for safety from the Insurance Institute for Highway Safety) or by the need to cater to larger households (the average size of a US family has been declining).

And yet another countertrend involving the shrinking size of American families has been the increasing size of American houses. Houses in Levittown, the first post–Second World War large-scale residential suburban development in New York, were just short of 70 square meters; the national mean reached 100 in 1950, topped 200 in 1998, and by 2015 it was a bit above 250 square meters, slightly more than twice the size of Japan’s average single-family house. American house size has grown 2.5 times in a single lifetime; average house mass (with air conditioning, more bathrooms, heavier finishing materials) has roughly tripled; and the average per capita habitable area has almost quadrupled. And then there are the US custom-built houses whose average area has now reached almost 500 square meters.

As expected, larger houses have larger refrigerators and larger TV screens. Right after the Second World War, the average volume of US fridges was just 8 cubic feet; in 2020 the bestselling models made by GE, Maytag, Samsung, and Whirlpool had volumes of 22–25 cubic feet. Television screens started as smallish rectangles with rounded edges; their dimensions were limited by the size and mass of the cathode-ray tube (CRT). The largest CRT display (Sony PVM-4300 in 1991) had a 43-inch diagonal display but it weighed 200 kilograms. In contrast, today’s popular 50-inch LED TV models weigh no more than 25 kilograms. But across the globe, the diagonals grew from the post–Second World War standard of 30 centimeters to nearly 60 centimeters by 1998 and to 125 centimeters by 2021, which means that the typical area of TV screens grew more than 15-fold.

Undoubtedly, many larger sizes make life easier, more comfortable, and more enjoyable, but these rewards have their own limits. And there is no evidence for concluding that oversize houses, gargantuan SUVs, and commercial-size fridges have made their owners happier: surveys of US adults asked to rate their happiness or satisfaction in life actually show either no major shifts or long-term declines since the middle of the 20th century. There are obvious physical limits to all of these excesses, and in the fourth chapter I will examine some important long-term growth trends to show that the sizes of many designs have been approaching their inevitable maxima as S-shaped (sigmoid) curves are reaching the final stages of their course.

This new, nearly universal, worship of larger sizes is even more remarkable given the abundance of notable instances when larger sizes are counterproductive. Here are two truly existential examples. Excessive childhood weight is highly consequential because the burden of early onset obesity is not easily shed later in life. And on the question of height, armies have always had height limits for their recruits; a below-average size was often a gift, as it prevented a small man (or a very tall one!) getting drafted and killed in pointless conflicts.

Large countries pose their own problems. If their territory encompasses a variety of environments, they are more likely to be able to feed themselves and have at least one kind of major mineral deposit, though more often several. This is as true of Russia (the world’s largest nation) as it is of the USA, Brazil, China, and India. But nearly all large nations tend to have larger economic disparities than smaller, more homogeneous countries do, and tend to be riven by regional, religious, and ethnic differences. Examples include the North-South divide in the US; Canada’s perennial Quebec separatism; Russia’s problems with militant Islam (the Chechen war, curiously forgotten, was one of the most brutal post–Second World War conflicts); India’s regional, religious, and caste divisions. Of course, there are counterexamples of serious disparities and discord among small-size nations — Belgium, Cyprus, Sri Lanka — but those inner conflicts matter much less for the world at large than any weakening or unraveling of the largest nations.

But the last 150 years have not only witnessed a period of historically unprecedented growth of sizes, but also the time when we have finally come to understand the real size of the world, and the universe, we inhabit. This quest has proceeded at both ends of the size spectrum, and by the end of the 20th century we had, finally, a fairly satisfactory understanding of the smallest (at the atomic and genomic levels) and the largest (size of the universe) scale. How did we get there?


Hitting the Books: René Descartes had his best revelations while baked in an oven

Some of us do our best thinking in the shower, others do it while on the toilet. René Descartes? He pondered most deeply while ensconced in a baker's oven. The man simply needed to be convinced of the oven's existence before climbing in. Such are the quirks of the most monumental minds humanity has to offer. In the hilarious and enthralling new book, Edison's Ghosts: The Untold Weirdness of History's Greatest Geniuses, Dr. Katie Spalding explores the illogical, unnerving, and sometimes downright strange behaviors of luminaries like Thomas "Spirit Phone" Edison, Isaac "Sun Blind" Newton, and Nikola "I fell in love with a pigeon" Tesla.

Little, Brown and Company

Excerpted from Edison's Ghosts: The Untold Weirdness of History's Greatest Geniuses by Dr. Katie Spalding. Published by Little, Brown and Company. Copyright © 2023 by Katie Spalding. All rights reserved.


When René Descartes Got Baked

René Descartes, like Pythagoras before him and Einstein after, occupies that special place in our collective consciousness where his work has become … well, essentially a short-hand for genius-level intellect. Think about it – in any cartoon or sitcom where one character is (or, through logically-spurious means, suddenly becomes) a brainiac, there are three things they’re narratively bound to say: ‘the square of the hypotenuse is equal to the sum of the squares of the other two sides’ – that’s Pythagoras; ‘E = mc²’ – thank you, Einstein; and finally, ‘cogito ergo sum’. And that is Descartes.

Specifically, it’s old Descartes – Descartes after he had figured his shit out. But while his later writings undeniably played a huge and important role in setting up how we approach the world today – he’s actually one of the main figures who brought us the concept of the scientific method – Descartes’s early years leaned a little more on the silly and gullible than the master of scepticism he’s come to be known as.

Descartes was born in 1596, which places him firmly in that period where science and philosophy and magic were all pretty much the same thing. He’s probably best known as a philosopher these days, but that’s likely because a lot of his developments in mathematics have become so incredibly fundamental that we kind of forget they had to be invented by anybody at all. And I know I’m saying that with ten years of mathematical training behind me and a PhD on the shelf, but even if you haven’t set foot in a maths class since school, you’ll be familiar with something that Descartes invented, because he was the guy who came up with graphs. That’s actually why the points in a graph are given by Cartesian coordinates – it’s from the Latin form of his name, Renatus Cartesius.

And while maths, despite what everyone keeps telling me, can be sexy, ‘cogito ergo sum’ really does have a nice ring to it, doesn’t it? ‘I think, therefore I am.’ It doesn’t sound like a huge philosophical leap – in fact, it kind of sounds like tautological nonsense – but it’s actually one of the most important conclusions ever reached in Western thought.

See, before Descartes, philosophy didn’t exactly have the sort of wishy-washy, pie-in-the-sky reputation it enjoys today. The dominant school of thought was Scholasticism, which was basically like debate club mixed with year nine science. Sounds fair enough, but in practice – and especially when combined with the strong religious atmosphere and general lack of science up till that point – it was basically a long period of everybody riffing on Plato and Aristotle and trying to make their Ancient Greek teachings match up with the Bible. This was, needless to say, not always easy, and led to rather a lot of navel gazing over questions like ‘Do demons get jealous?’ and ‘Do angels take up physical space?’

Descartes’s approach was radically different. He didn’t see the point in answering questions like how many angels can dance on the head of a pin until he’d been properly convinced of the existence of angels. And dancing. And pins.

Now, of course, this is the point when non-philosophers throw up their hands in despair and say something along the lines of ‘Of course pins exist, you idiot, I have some upstairs keeping my posters up! Jesus, René, are we really paying a fortune in university fees just so you can sit around and doubt the existence of stationery?’

But to that, Descartes would reply: are you sure? I mean, we’ve all had dreams before that are so convincing that we wake up thinking we really did adopt a baby elephant after our teeth all fell out. How do I know I’m not dreaming now? How do I know this isn’t a The Matrix-type situation, and what you think are pins are just a trick being played on us by Agent Smith?

In fact, when you get right down to it, Descartes would say, how can we be sure anything exists? I might not even exist! I might be a brain in a vat, being cleverly stimulated in such a way as to induce a vast hallucination! And yes, sure, I agree that sounds unlikely, but it’s not impossible – the point is, we simply can’t know.

The only thing I can be sure of, Descartes would continue – despite everyone by this point rolling their eyes and muttering things like ‘see what you started, Bill’ – is that I exist. And I can be sure of that, because I’m thinking these thoughts about what exists. I may just be a brain in a vat, being fed lies about the reality that surrounds me, but ‘I’, ‘me’, my sense of self and consciousness – that definitely exists. To summarise: I think – therefore I am.

It was a hell of a breakthrough – he’d basically Jenga’d the entire prevailing worldview into obsolescence. And it’s the kind of idea that could really only have come from someone like Descartes: a weirdo celebrity heretic pseudo-refugee who had a weakness for cross-eyed women, weed and conspiracy theories.

Descartes was, as his name suggests, French by birth, hailing from a small town vaguely west of the centre of the country. If you look it up on a map, you’ll see it’s actually called Descartes, but it’s not some uncanny coincidence – the town was renamed in 1967 after its most famous resident.

Which is kind of odd, because it’s not like Descartes spent all that much time there. He went to school in La Flèche, more than 100km away, where even at the tender age of ten he was displaying the sort of behaviour that would make him perfectly suited to a life of philosophy, sleeping in until lunch every day and only attending lectures when he felt like it. This can’t have made him all that popular with the other kids, who were all expected to get up before 5am, but that’s why you choose a school whose rector is a close family friend, I suppose, and, in any case, by the time the young René turned up they were probably all too tired to do much about it.

After finishing high school, he spent a couple of years at uni studying law, as per his father’s wishes – his dad came from a less well-to-do branch of the Descartes family tree, and probably would have wanted Descartes to keep up appearances for the sake of holding on to posh perks like not paying taxes. It must have pained him, therefore, when after graduating with a Licence in both church and civil law, Descartes immediately gave it all up and went on an extended gap year. ‘As soon as my age permitted me to pass from under the control of my instructors, I entirely abandoned the study of letters, and resolved no longer to seek any other science than the knowledge of myself, or of the great book of the world,’ he would later write, like some kind of nineteen-year-old Eat Pray Love devotee.

‘I spent the remainder of my youth in travelling, in visiting courts and armies, in holding intercourse with men of different dispositions and ranks, [and] in collecting varied experience,’ he continued, in his philosophical treatise-slash-autobiography Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences, which for obvious time-saving reasons is usually referred to as Discourse on the Method. And like so many philosophy students throughout history, there was one place he found in his travels that caught Descartes’s heart and imagination more than anywhere else: Amsterdam.

Now, it is of course true that places can change a lot over the course of 400 years – at this point in history, France was being ruled by a nine-year-old autocrat and his mum, Germany didn’t exist, and England was a few years short of becoming a Republic. So you might think, sure, these days Amsterdam has a bit of a reputation, but back in Descartes’s time, it was probably a hub of quiet intellectualism and sombre, clean living.

Nope! Dynasties may rise and fall, empires spread and eventually fracture, but apparently, Amsterdam has always been Amsterdam. Descartes spent his first few years in the city living his absolute best life, studying engineering and maths under the direction of Simon Stevin – another guy you’ve never heard of who made a mathematical breakthrough you almost certainly use every single day of your life, since he invented the decimal point – and dressing like an emo and throwing himself into music. He joined the Dutch army for a bit, despite being by all accounts a tiny weedy bobble-headed French guy, and, yes, he almost certainly smoked a bunch of pot along the way.

And then, one November night in 1619, while on tour in Bavaria, Descartes had a Revelation. And he had it, according to his near-contemporary biographer Adrien Baillet, inside an oven.

‘He found himself in a place so remote from Communication, and so little frequented by people, whose Conversation might afford him any Diversion, that he even procured himself such a privacy, as the condition of his Ambulatory Life could permit him,’ Baillet writes.

‘Not … having by good luck any anxieties, nor passions, within, that were capable of disturbing him, he staid withal all the Day long in his stove, where he had leisure enough to entertain himself with his thoughts,’ he continues, as if that’s a normal thing to write and not an account of someone being so introverted that they secluded themselves miles away from anyone who knew them and then crawled into an oven for the day.

Modern biographers have suggested a few interpretations of what this oven might have been, and I’m sorry to report that, of course, it’s not as ridiculous as it first seems: in the seventeenth century, before we’d tamed electricity and gas mains and whatnot, a ‘stove’ or ‘oven’ was more like your modern-day airing cupboard than an Aga. Just bigger. And fancier. And all your towels are on fire. Look, the analogy isn’t perfect, but the point is that when Descartes said, in Discourse on the Method, that he had ‘spent all day entertaining his thoughts in an oven’, he wasn’t being completely absurd – just, you know, kind of weird.

Depending on where you fall on the scale between ‘Descartes was a stoner lol’ and ‘Descartes was a paragon of virtue, 10/10 no notes awesome dude’, what happened next was either the result of too much weed, too much oven, or too much being a fricking genius destined to reform all of Western philosophy. Either way, he had a pretty rough night, full of strange dreams and disturbing hallucinations* that even the loyal Baillet thought might be a sign he was going a little bonkers.

‘He acquaints us, That on the Tenth of November 1619, laying himself down Brim-full of Enthusiasm, and … having found that day the Foundations of the wonderful Science, he had Three dreams one presently after another; yet so extraordinary, as to make him fancy that they were sent him from above,’ writes Baillet, just in case you were wondering where on that scale Descartes would put himself. In fact, so sure was he of the divine nature of his dreams that, Baillet said, ‘a Man would have been apt to have believed that he had been a little Crack-brain’d, or that he might have drank a Cup too much that Evening before he went to Bed.

‘It was indeed, St. Martin’s Eve, and People used to make Merry that Night in the place where he was … but he assures us, that he had been very Sober all that Day, and that Evening too and that he had not touched a drop of Wine for Three Weeks together.’

Sure, René. Though honestly, the content of the dreams isn’t as noteworthy as the conclusions he drew from them – unless you think ‘walking through a storm to collect a melon from a guy’ is super weird, I guess. And goodness knows how he got from cantaloupe to conceptualism, but these three dreams are said to have given him the inspiration first for analytic geometry – that is, his maths stuff – and then the realisation that he could apply the same kind of logical rigour to philosophy. And I don’t want to minimise what Descartes achieved after this melon-based enlightenment – it takes guts to stand up in a world governed by strict ritual and belief and announce that not only is everyone around you an idiot, but also they probably don’t even exist, so there. But have you ever heard that saying about not being so open-minded that your brain falls out?

Well, 1619 was also the year that Descartes, writing under the pseudonym ‘Polybius Cosmopolitanus’ – Polybius being an ancient Greek historian, and Cosmopolitanus being Latin for ‘citizen of the world’ – released the Mathematical Thesaurus of Polybius Cosmopolitanus. It kind of sounds like a Terry Gilliam movie, but it was actually a proposal for a way to reform mathematics as a whole.

It doesn’t matter that you’ve never heard of it. It’s not as famous as the Discourse; in fact, it may not have ever even been completed. The important bit wasn’t what was contained inside the book, but who it was dedicated to: to ‘learned men throughout the world, and especially to the F.R.C. very famous in G[ermany].’

And who was this mysterious F.R.C? Descartes was specifically referencing the Frères de la Rose Croix. In English, they were known as the Brothers of the Rosy Cross – and, today, they’re called the Rosicrucians. So, you may have heard of the Rosicrucians, but it’s more likely you haven’t. Today, the term actually refers to two separate organisations, both of which claim to be the ‘real’ Rosicrucians and both of which denounce the other group as being a bunch of weirdos. They’re equally wrong on the first point, and equally right on the second: there’s no Rosicrucian group around today that is directly linked to the original group that Descartes was a fan of, and every iteration of the organisation is and always has been fucking bananas.

But people in search of a new outlook on the universe often don’t get to choose which batshit philosophy the world throws at them first, and Descartes had the peculiar fortune of going through his minor mental breakdown in early seventeenth-century Germany.

Between 1614 and 1616, three ‘manifestos’ were published in Germany. They were anonymous, recounting the tale of one Christian Rosenkreuz, a man who was born in 1378, travelled across the world, studied under Sufi mystics in the Middle East, came back to Europe to spread the knowledge he had gained in his travels, was rejected by Western scientists and philosophers, and so founded the Rosicrucian Order, a grand name for what was apparently a group of about eight nerdy virgins. All of this, the manifestos said, he accomplished by the age of about twenty-nine, after which he presumably just sat on his thumbs for a long old while since the next big thing he’s said to have done was die aged 106.

Now, some people have posited that everything you just read is false – a kind of early modern conspiracy theory. And yes, ‘Christian Rose-Cross’, as the name translates from German, is rather on the nose for the founder of a Christian sect, and, yes, it’s a bit farfetched for anybody to have lived for more than a century in the 1400s, and, yes, OK, so the last manifesto was almost certainly actually written by a German theologian named Johann Valentin Andreae, who was attempting to take the piss out of the whole thing and publicly renounced it when he realised people were taking him seriously – but that’s the thing: people did take it seriously. And one of the people who took it seriously seems to have been Descartes.

‘There is a single active power in things: love, charity, harmony,’ mused the philosopher most famous for radical doubt of everything that couldn’t be proved via logic alone. Not in any published work – these were the thoughts of Descartes the early-twenties guy just trying to figure his shit out, found years later in the journal he kept throughout his life.

Another: ‘The wind signifies spirit; movement with the passage of time signifies life; light signifies knowledge; heat signifies love; and instantaneous activity signifies creation. Every corporeal form acts through harmony. There are more wet things than dry things, and more cold things than hot, because if this were not so, the active elements would have won the battle too quickly and the world would not have lasted long.’

If that sounds, you know, completely ridiculous to you, that’s probably because we live in a post-Descartes world, and he didn’t. All this poor oven-baked idiot had at his disposal were a dream about melons, a steadfast conviction that he had been personally chosen by God to reform the entirety of Western thought up until that point, and some rumours about a weird sect of rosy German virgins who were devoted to doing just that.

You may have already guessed the next bit of the story: Descartes joins the Rosicrucians and embarks on some insane rituals and philosophies that we’ve never heard of today because it doesn’t fit in with our modern ideas of ‘genius’, right?

It’s actually way more stupid than that. In a series of events that, once again, really feels like it was ripped straight out of some cult comedy movie, Descartes tried to join the Rosicrucians, but kept running into the problem of them not, in fact, existing. So he couldn’t join the group, but what he could and did do was accidentally make everyone think he had joined, thus entirely screwing over his reputation as someone to take seriously.

Of course, in the grand scheme of things, this didn’t matter much, because to a lot of people he was dangerous enough even without all the conspiracy stuff: his insistence that truth was something for humans, not God, to judge, and the idea that authority should or even could be questioned, made him an enemy of most established Churches, so much so that he eventually published an extremely circular and nonsensical ‘proof’ of God’s existence to try to placate his attackers.

The irony was that Descartes knew God existed – otherwise who had told him to transform philosophy and mathematics via the medium of melons? And, ultimately, as hubristic as this claim was, Descartes did make good on it, publishing the end result of that night in the oven in the 1640s with a slew of philosophical and metaphysical treatises, which were hailed in his beloved Netherlands as ‘heretical’ and ‘contrary to orthodox theology’ and ‘get out of our goddamn town Descartes.’

Eventually, Descartes found refuge with Christina, Queen of Sweden, who was a fan of his ideas about science and love. She invited him to her court with the promises of setting up a new scientific academy and tutoring her personally. It seemed too good to be true. It was. In 1649, in the middle of winter, Descartes moved to Queen Christina’s cold, draughty Swedish castle and discovered that he couldn’t fucking stand his new boss or home. Worst of all for the philosopher who lived his entire life by the principle of never once waking up before noon, Christina declared that she could only be tutored at five in the morning, a demand that Descartes responded to as any night owl would: by saying ‘I would literally rather die’ and promptly proving his point by literally dying just a few months later. In his final act, the man famous for telling the world ‘I think, therefore I am’ had posed an equally unknowable philosophical conclusion: he would no longer think, and therefore he no longer existed.

Perhaps the final irony in the tale is that, as heretical as cogito ergo sum was considered at the time, with its previously unthinkably radical concept of doubting everything, even that which seems self-evident – modern philosophers have actually critiqued Descartes as not going far enough. Thinkers such as Kierkegaard have blasted Descartes for presupposing that ‘I’ exists at all, and Nietzsche for presupposing that ‘thinking’ exists.

I guess the moral of Descartes’s story, if there is one, is probably this: you can’t please all of the people all of the time – especially if they’re philosophers. So, honestly? Why not just smoke a bunch of weed and crawl into an oven?

* Some modern scientists have suggested that Descartes’s night in the oven may in fact be the earliest recorded experience of Exploding Head Syndrome, a sleep disorder you may well have had yourself once or twice. Despite the gnarly name, it doesn’t actually involve your head exploding – that would certainly have made Descartes’s future work more impressive – but it does cause you to hear loud bangs and crashes that aren’t really there, and sometimes see flashes of light as well, both of which Descartes recorded experiencing that night.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-edisons-ghosts-katie-spalding-little-brown-and-company-143030441.html?src=rss

Swiss researchers use a wireless BCI to help a spinal injury patient walk more naturally

More than a million people in North America live with some form of spinal cord injury (SCI), and treating and rehabilitating those patients costs more than $7 billion annually. The medical community has made incredible gains toward mitigating, if not reversing, the effects of paralysis in the last quarter-century, including advances in pharmacology, stem cell technologies, neuromodulation, and external prosthetics. Electrical stimulation of the spinal cord has already shown especially promising results in helping spinal injury patients rehabilitate, improving not just extremity function but spasticity, bladder and blood pressure control as well. Now, in a study published in Nature on Tuesday, SCI therapy startup Onward Medical announced that it has helped improve a formerly paraplegic man’s walking gait through the use of an implanted brain computer interface (BCI) and a novel “digital bridge” that spans the injured section of the spinal cord.

We’ve been zapping paraplegic patients’ spines with low-voltage jolts as part of their physical rehabilitation for years in a process known as Functional Electrical Stimulation (FES). Electrodes are placed directly over the nerves they’re intended to incite – externally bypassing the patient’s own disrupted neural pathways – and, when activated, cause the nerves underneath to fire and the muscles they serve to contract. Researchers have used this method to restore hand and arm motion in some patients, the ability to stand and walk in others and, for a lucky few, exosuits! The resulting limb motions, however, were decidedly ungraceful: ponderous arm movements and walking gaits that more closely resembled shuffles.

Onward’s earlier research into epidural electrical stimulation showed that it was effective at targeting nerves in the lower back that could be used to trigger leg muscles. But the therapy at that time was hampered by the need for wearable motion sensors and by “the participants … limited ability to adapt leg movements to changing terrain and volitional demands.” Onward addressed that issue in Tuesday’s study by incorporating a “digital bridge” to monitor the brain’s command impulses and deliver them, wirelessly and in real time, to a stimulation pack implanted in the patient’s lower back.

Clinicians have employed these systems for the better part of a decade to assist in improving upper extremity control and function following SCI – Onward’s own ARC EX system is designed to do just that – though this study was the first to apply the same theories to the lower extremities.

Onward’s patient was a 38-year-old man who had suffered an “incomplete cervical (C5/C6) spinal cord injury” a decade before and who had undergone a five-month neurorehabilitation program with “targeted epidural electrical stimulation of the spinal cord” in 2017. “This program enabled him to regain the ability to step with the help of a front-wheel walker,” the research team noted in the Nature study. “Despite continued use of the stimulation at home, for approximately three years, he had reached a neurological recovery plateau.”

In addition to the EX, Onward Medical has also developed an internally mounted electrostimulation therapy, the ARC IM. Per the company, it is “purpose-built for placement along the spinal cord to stimulate the dorsal roots” to help improve SCI patients’ blood pressure regulation. The system in Tuesday’s study used the ARC IM as a base and married it to a WIMAGINE brain computer interface.


The Onward team first had to install the BCI inside the patient’s skull. Technically, it was a pair of 64-lead electrode implants, each mounted in a 50-millimeter circular titanium case that sits flush with the skull. The WIMAGINE “is less invasive than other options while offering sufficient resolution to drive walking,” Dave Marver, Onward Medical CEO, told Engadget via email. “It also has five-year data that demonstrates stability in the clarity of signals produced.”

Two external antennas sit on the scalp: the first provides power to the implants via inductive coupling, while the second shunts the signal to a portable base station for decoding and processing. The processed signal is then beamed wirelessly to the ACTIVA RC implantable pulse generator sitting atop the patient’s lumbar region, where 16 more implanted electrodes shock the appropriate nerve clusters to move the patient’s legs. Together they form a Brain Spine Interface (BSI) system, per Onward.

The entire setup is designed to be used independently by the patient. The assistive walker houses all the BSI bits and pieces while a tactile feedback interface helps them correctly position the headset and calibrate the predictive algorithm.

In order to get the BCI and pulse generator to work together seamlessly, Onward leveraged an “Aksenova/Markov-switching multilinear algorithm that linked ECoG signals to the control of epidural electrical stimulation parameters,” which seems so obvious in hindsight. Basically, this algorithm predicts two things: the probability that the patient intends to move a specific joint, based on the signals it’s monitoring, and both the amplitude and direction of that presumed intended movement. Those predictions are then dumped into an analog controller, which translates them into commands that are, in turn, cycled to the pulse generator every 300 milliseconds. In all, the latency between the patient thinking, “I should walk over there,” and the system decoding those thoughts is just 1.1 seconds.
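To make that control loop concrete, here is a minimal, self-contained sketch in Python of a decode-and-stimulate cycle like the one described above. Everything in it (the function names, the confidence gate, the 6 mA safety cap, the random stand-in decoder) is an illustrative assumption rather than Onward’s actual implementation; only the roughly 300-millisecond command cycle is taken from the study.

```python
# Minimal, hypothetical sketch of a brain-spine interface control loop.
# None of these names or numbers are Onward Medical's; they are stand-ins
# for illustration. Only the ~300 ms update cycle comes from the article.
import random
import time
from dataclasses import dataclass

UPDATE_PERIOD_S = 0.3   # new stimulation command roughly every 300 ms
SAFE_LIMIT_MA = 6.0     # hypothetical cap on stimulation amplitude


@dataclass
class IntentPrediction:
    joint: str           # e.g. "left_hip"
    probability: float   # confidence that the patient intends to move this joint
    amplitude: float     # predicted strength of the intended movement
    direction: float     # signed value: flexion (+) vs. extension (-)


def decode_intent(ecog_window: list[float]) -> IntentPrediction:
    """Stand-in for the multilinear decoder that maps an ECoG feature window
    to a movement intention. Here it just produces toy output."""
    return IntentPrediction(
        joint="left_hip",
        probability=random.random(),
        amplitude=sum(abs(x) for x in ecog_window) / max(len(ecog_window), 1),
        direction=1.0 if sum(ecog_window) >= 0 else -1.0,
    )


def to_stim_command(pred: IntentPrediction) -> dict:
    """Translate a decoded intention into stimulation parameters for the
    implanted pulse generator (purely illustrative)."""
    if pred.probability < 0.5:   # ignore low-confidence predictions
        return {"target": pred.joint, "amplitude_mA": 0.0}
    return {
        "target": pred.joint,
        "amplitude_mA": min(pred.amplitude, SAFE_LIMIT_MA),
        "direction": pred.direction,
    }


def run_bsi(get_ecog_window, send_to_pulse_generator, cycles: int = 10) -> None:
    """Repeatedly decode intent and refresh the stimulation program."""
    for _ in range(cycles):
        pred = decode_intent(get_ecog_window())
        send_to_pulse_generator(to_stim_command(pred))
        time.sleep(UPDATE_PERIOD_S)


if __name__ == "__main__":
    # Dummy 64-channel signal source and a print-based "pulse generator."
    run_bsi(
        get_ecog_window=lambda: [random.gauss(0, 1) for _ in range(64)],
        send_to_pulse_generator=print,
        cycles=5,
    )
```

In the real system, a calibrated multilinear decoder would replace the random stand-in and the commands would be beamed to the implanted pulse generator, but the overall shape of the loop (decode intent, gate on confidence, update stimulation, repeat) is what the paragraph above describes.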

Calibrating the system to the patient proved an equally quick process. The patient had figured out how to properly “activate” the muscles in their hips to generate enough torque to swing their legs within the first two minutes of trying — and did it with 97 percent accuracy. Over the course of the rehabilitation, the patient managed to achieve control over the movements of each joint in their leg (hip, knee and ankle) with an average accuracy (in that the BSI did what the patient intended) of around 75 percent.

“After only 5 min of calibration, the BSI supported continuous control over the activity of hip flexor muscles,” the team continued, “which enabled the participant to achieve a fivefold increase in muscle activity compared to attempts without the BSI.” Unfortunately, those gains were wiped away as soon as the BCI was turned off, with the patient instantly losing the ability to step, they explained. “Walking resumed as soon as the BSI was turned back on.”

It wasn’t just that the patient was able to graduate from walking with a front-wheeled frame walker to crutches thanks to this procedure – their walking gait improved significantly as well. “Compared to stimulation alone, the BSI enabled walking with gait features that were markedly closer to those quantified in healthy individuals,” the Onward team wrote. The patient was even able to use the system to cross unpaved terrain while on their crutches, a feat that still routinely proves hazardous for many bipedal robots.

In all, the patient underwent 40 rehab sessions with the BCI – a mix of standard physio-rehab along with BCI-enabled balance, walking and movement exercises. The patient saw moderate gains in their sensory (light touch) scores but a whopping 10-point increase in their WISCI II score. WISCI II is the Walking Index for Spinal Cord Injury, a 21-point scale measuring a patient’s ambulatory capacity, ranging from 20 (“can walk with zero assistance”) down to 0 (“bedridden”). Onward’s patient went from a 6 to a 16 with the help of this therapy.

“As the participant had previously reached a plateau of recovery after intensive rehabilitation using spinal cord stimulation alone, it is reasonable to assume that the BSI triggered a reorganization of neuronal pathways that was responsible for the additional neurological recovery,” the Onward team wrote. “These results suggest that establishing a continuous link between the brain and spinal cord promotes the reorganization of residual neuronal pathways that link these two regions under normal physiological [conditions].”

While the results are promising, much work has yet to be done. The Onward team argues that future iterations will require “miniaturization of the base station, computing unit and unnoticeable antennas,” faster data throughputs, “versatile stimulation parameters, direct wireless control from the wearable computing unit,” and “single low-power integrated circuit embedding a neuromorphic processor with self-calibration capability that autonomously translates cortical activity into updates of stimulation programs.”

Despite the daunting technical challenges, “the BCI system described in Tuesday’s Nature publication may reach the market in five to seven years,” Marver predicted. “It is possible and realistic that a BCI-augmented spinal cord stimulation therapy will be on the market by the end of the decade.”

This article originally appeared on Engadget at https://www.engadget.com/swiss-researchers-help-a-spinal-injury-patient-to-walk-more-naturally-using-a-wireless-bci-151542965.html?src=rss