Posts with «author_name|andrew tarantola» label

Kia's EV6 is the new benchmark for affordable electric cars

We got our first good look at the EV6 last March and, nearly a year later, finally got to sit in it, drive it, and push every button in the cabin last week during a day-long press event in Northern California. It’s the first Kia vehicle to be produced under the company’s new Plan S electrification strategy and is expected to be joined by nearly a dozen other new EV models by 2026, with Kia noting that “All dedicated Kia EVs will begin with the ‘EV’ prefix, followed by a number that indicates the car’s size and position in the lineup, not its chronological place in the launch cadence.”

Hyundai Motor Group

And that’s just the vehicles built on Hyundai Motor Group’s E-GMP battery-propulsion platform (Hyundai owns Kia). When the EV6 arrives in all 50 states later this spring, it’ll be going up against the likes of the Ford Mustang Mach-E, the Volkswagen ID.4, the Tesla Model Y, the Hyundai Ioniq 5 and Nissan’s Ariya — not to mention Kia’s own Niro EV and its brother from a Hyundai mother, the Kona EV — as well as the Toyota bZ4X and Subaru Solterra when they eventually arrive.

The EV6 will be available in three trim levels: Light, Wind, and GT-Line. Technically there’s a fourth version, the First Edition, but the 1,500 units in that introductory lot sold out in something like 11 hours, so your chances of catching one for sale at the local dealership are quite low.

Hyundai Motor Group

The EV6 Light is Kia’s introductory trim level, retailing for $40,900 and offering performance to match. Its 58 kWh nickel-cobalt-manganese battery powers a 125 kW rear motor producing 167 horsepower. That translates into an 8-second 0-60, an electronically limited 115 MPH top speed and an EPA-rated range of 232 miles. In terms of efficiency, the Light will net you around 136 eMPG in the city (thanks, regenerative braking!) and 100 eMPG at freeway speeds. Like its better-appointed brethren, the Light employs MacPherson struts up front and a multi-link suspension in the rear.

Its drivetrain, unfortunately, can only handle a 400V charging architecture, which lengthens the time it takes to fully recharge. It’s not terrible, mind you: a full charge off a 50 kW DC fast charger takes just over an hour — and a cool 18 minutes if you’re lucky enough to snag a 350 kW station. At home, using a 240V / 48A connection (i.e., a home charging box), you’re looking at just under 6 hours for a full charge, but with a standard 110V / 12A socket (like what you plug your coffee maker into), that’s going to take days. Literally, it’d have to sit on charge for more than a weekend — 51 hours and 5 minutes, according to Kia’s numbers — to max out its battery capacity. You won’t see the same delays with either the Wind or the GT, as they use the 800V architecture we’re starting to see on higher-end EVs like GM’s Hummer EV, the Porsche Taycan, Audi’s E-Tron and even the Mach-E.
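
As a rough sanity check on Kia’s home-charging figures, AC charge time is essentially battery capacity divided by charger power, padded for losses. Here’s a back-of-the-envelope sketch; the `charge_hours` helper and the 88 percent efficiency figure are my assumptions, not Kia’s:

```python
def charge_hours(battery_kwh, volts, amps, efficiency=0.88):
    """Rough full-charge time: capacity / (power * efficiency).

    The efficiency factor is an assumption; real-world charging also
    tapers near 100%, so treat this as an estimate, not a spec.
    """
    power_kw = volts * amps / 1000
    return battery_kwh / (power_kw * efficiency)

# EV6 Light's 58 kWh pack on a 240V / 48A home box (11.5 kW):
print(f"{charge_hours(58, 240, 48):.1f} h")   # ~5.7 h, near Kia's "just under 6 hours"
# The same pack on a 110V / 12A wall socket (1.32 kW):
print(f"{charge_hours(58, 110, 12):.1f} h")   # ~49.9 h, in line with Kia's 51 h 5 min
```

The gap between these estimates and Kia’s official numbers comes down to charge taper and overhead that a flat efficiency factor can’t capture.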

Hyundai Motor Group

What’s more, the Wind (starting at $47,000) and GT (starting at $51,200 and topping out at $55,900) both offer a larger 77.4 kWh pack as well as the option of both front and rear motors, enabling AWD. You’re looking at 310 miles of range, a 7.2-second 0-60 and a 117 MPH top speed with the RWD iterations; 274 miles of range and a 5.1-second 0-60 for the AWDs. The RWD notches 134 eMPG in the city and 101 eMPG on the freeway, though the AWD’s efficiency takes a noticeable hit: 116 eMPG and 94 eMPG, respectively.
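
Dividing the EPA ranges by pack capacity makes those numbers easy to sanity-check: both packs work out to roughly four miles per kWh in RWD trim, with AWD giving up about half a mile per kWh. (The helper below is mine, not Kia’s.)

```python
def miles_per_kwh(epa_range_miles, pack_kwh):
    """Implied efficiency from EPA range and rated pack size."""
    return epa_range_miles / pack_kwh

print(round(miles_per_kwh(232, 58.0), 1))   # Light RWD: 4.0
print(round(miles_per_kwh(310, 77.4), 1))   # Wind/GT RWD: 4.0
print(round(miles_per_kwh(274, 77.4), 2))   # AWD: 3.54
```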

In terms of charging, the Wind and GT will require 73 minutes for a full charge on a 50 kW DC connection (and, again, 18 minutes with a 350 kW port, which provides roughly 217 miles of added range), about 7 hours on a 240V plug and a whopping 68 hours using 110V. They’ll also offer another first for Kia: V2L (vehicle-to-load) capability, similar to the Ford F-150 Lightning’s, meaning you’ll be able to use the EV6 as a giant, rolling battery to power various accessories, 110V power tools and sundry household items in the event of a blackout.

Hyundai Motor Group

Aside from the trim levels and powertrain differences, the various EV6s are practically identical from the outside given the common E-GMP underpinnings. Each measures 114 inches at the wheelbase (same as the Telluride SUV) with an overall length of ~184 inches. The crossover is 74 inches wide and 60.8 inches tall. The EV6 may look like a svelte sports coupe from its promotional photos but in real life, this is one chonky boi — not quite as tall as the Mach-E but just as broad and sporting beefy 19-inch rims (dubs are optional on the GT). It really fills out a standard parking space, though Kia is offering a cool valet feature (optional on Wind, standard on the GT) with the EV6 that allows you to line up the vehicle with a parking space, get out of the car and then use the key fob to remotely back it into the spot.

The EV6 has a damn comfortable interior. Its cabin is disconcertingly quiet with the doors closed and windows up. There’s a total of 102 cubic feet of space inside the EV6, 24.4 of which is dedicated to storage in the rear cargo area (50.2 cubic feet if you fold the seats down). You’ve got plenty of head and leg space regardless of whether you’re sitting in the front or back, though you might need to slouch a bit to fit three sets of shoulders across the rear bench seat. On the plus side, there is no central drive shaft running under the cabin (thanks, e-motors!) so there’s no hump to endure if you’re sitting in the middle.

Hyundai Motor Group

Kia also sprinkled USB and USB-C ports throughout the front and rear seating areas, so you won’t have to stretch very far to plug in. Heck, there’s even a wireless charging pad on the front armrest (next to the engine start button and drive selector). My only bugaboo with the seating layout was a minor one: the front seats employ rather elaborate headrests that tend to obscure the forward-facing view for people in the rear of the vehicle and, conversely, block out a noticeable portion of the rearview mirror.

Blind spots are not really a worry, however, given how many cameras Kia managed to pack into the vehicle. For example, when you engage your turn signal, a live rear-facing video feed from the side mirror pops up on the driver’s instrument cluster so you don’t cut off bicyclists or merge into the path of a tractor-trailer. You’ve also got a slew of 21 ADAS (driver assist) features, including rearview cameras for parking, lane keeping assist, lane departure warnings, automatic high beams, and forward collision avoidance.

Hyundai Motor Group

I was especially impressed with the EV6’s Level 2 highway driving assistance feature, Highway Driving Assist 2. Just click the appropriate button on the steering wheel and the adaptive cruise control will automatically center the vehicle in the lane, maintaining its course and speed even through turns. There were a handful of times when the system and I (and the car in the next lane over) mildly disagreed about where a turn in the road began or ended, but as long as I kept my hands on the wheel, minor course corrections were no big deal.

If anything, the reduced need to keep my eyes on the road allowed me sufficient time to figure out how to work the rather confusing central infotainment system. The EV6 comes equipped with a 12.3-inch color TFT touchscreen navigation display mounted in the center console. It offers AM/FM/Sirius radio running through a Meridian sound system, Bluetooth connectivity, a WiFi hotspot, and Android Auto/Apple CarPlay — ugh, the phone has to be physically tethered to enable CarPlay/Auto? Really? This is what we’re doing in 2022?

Hyundai Motor Group

I’m a fan of the physical volume and temperature control knobs that Kia incorporated into the design; I’m less a fan of the lower, secondary touchscreen, which alternates between a quick-selection bar for the media, navigation, and climate menus. The problem is that the button that flips functionality between the menu-select screen and the dedicated climate control menu is not well defined or delineated in any meaningful way (I honestly thought it was the button for the hazard lights until a Kia PR rep showed me otherwise). So unless you know specifically what you’re looking for, or tap it at random, there’s no direct way to change the cabin temperature, adjust the fan speed or activate the defogger — or, conversely, quickly access the navigation map or radio. And asking the onboard virtual assistant for help was like talking to an (even more) incompetent Siri; no amount of enunciation could get this thing to understand the words coming out of my mouth.

Hyundai Motor Group

There was one feature that really stood out to me, easily redeeming the secondary touchscreen’s learning curve: the AR display. It is absolutely brilliant. I gushed about Kia’s use of AR back in 2019 when I drove the Niro EV, but that one seemed more a proof of concept with its little pop-up screen mounted on the steering shaft. The EV6’s, by contrast, is a far more finished and polished product, beamed directly onto the front windshield with startling clarity. The vehicle’s speed, the road’s speed limit, the status of various cruise control features, and upcoming turns all appear to float about a car length ahead of you. It’s a fantastic, streamlined alternative to the (in my opinion) overly busy layout of the driver’s cluster. The information can be a bit tricky to read when wearing sunglasses (especially the polarized variety), but other than that, the display is easily legible regardless of how bright or dark it is outside and can be adjusted to account for the driver’s height and viewing preferences.

Of course, all these technological bells and whistles would be rendered moot if it handled like the decrepit Elantra I usually drive. Thankfully, the EV6 does not. It isn’t as overtly aggressive as the Mach-E, nor is it quite as nimble through turns as the Polestar 2 — and it certainly isn’t nearly as pretentious as the Model Y — but the EV6 doesn’t have to be. Kia, from what I gathered from the company’s pre-drive presentation, is positioning the EV6 as a Gen Z family sedan, a Taurus SHO for millennials, and for that I applaud the company. Cranking through hairpins on the 175 and opening up the throttle along quiet stretches of the 101 were fun and all, but this car is not built for racing: it’s not going to suck the fillings out of your teeth when you floor the accelerator, and you’re not going to be taking street bikes on the inside through turns. What the EV6 will do is ferry your anklebiters to soccer practice before you run errands around town for the afternoon — maybe even take the family out glamping on the weekend — and do it in comfort, style and safety.

Hitting the Books: The decades-long fight to bring live television to deaf audiences

The Silent Era of cinema was perhaps its most equitable, with hearing and hearing-impaired viewers able to enjoy productions alongside one another; with the advent of "talkies," however, deaf and hard-of-hearing Americans found themselves largely excluded from the new dominant entertainment medium. It wouldn't be until the second half of the 20th century that advances in technology enabled captioned content to be broadcast directly into homes around the country. In his latest book, Turn on the Words! Deaf Audiences, Captions, and the Long Struggle for Access, Harry G. Lang, professor emeritus at the National Technical Institute for the Deaf at Rochester Institute of Technology, documents the efforts of accessibility pioneers over more than a century to bring closed captioning to the American people.

Gallaudet University Press

From Turn on the Words! Deaf Audiences, Captions, and the Long Struggle for Access by Harry G. Lang. Copyright © 2021 by Gallaudet University. Excerpted by permission.


The Battle for Captioned Television

To the millions of deaf and hard of hearing people in the United States, television before captioning had been “nothing more than a series of meaningless pictures.” In 1979, Tom Harrington, a twenty-eight-year old hard of hearing audiovisual librarian from Hyattsville, Maryland, explained that deaf and hard of hearing people “would like to watch the same stuff as everyone is watching, no matter how good or how lousy. In other words, to be treated like everyone else.”

On March 16, 1980, closed captioning officially began on ABC, NBC, and PBS. The first closed captioned television series included The ABC Sunday Night Movie, The Wonderful World of Disney, and Masterpiece Theater. In addition, more than three decades after the movement to make movies accessible to deaf people began, ABC officially opened a new era by airing its first closed captioned TV movie, Force 10 from Navarone.

By the end of March 1980, sixteen captioned hours of programming were going out over the airwaves each week, and by the end of May, Sears had sold 18,000 of the decoding units within four months of offering them for sale. Sears gave NCI an $8 royalty for each decoding device sold. The funds were used to defray the costs of captioning. In addition to building up a supply of captioned TV programs during its first year of operation, so that a sufficient volume would be available for broadcast, NCI concentrated on training caption editors. A second production center was established in Los Angeles and a third in New York City.

John Koskinen, chairman of NCI’s board, reflected on the challenges the organization faced at this time. A much smaller market for the decoders was evident than that estimated through early surveys. As with the telephone modem that was simultaneously developing, the captioning decoders cost a significant sum for most deaf consumers in those days, and the expense of a decoder did not buy a lot because not all the captioned hours being broadcast were of interest to many people. Although the goal was to sell 100,000 decoders per year, NCI struggled to sell 10,000, and this presented a financial burden.

To help pay for the captioning costs, NCI also set up a “Caption Club” to raise money from organizations serving deaf people and from other private sources. By December 1983, $15,000 was taken in and used to pay for subtitles on programs that otherwise would not be captioned. By 1985, there were 3,500 members promoting the sales.

Interestingly, when sales suddenly went up one year, NCI investigated and found that the Korean owner of an electronics store in Los Angeles was selling decoders as a way to enhance English learning.

The next big breakthrough was the move toward the use of digital devices recently adopted by court recorders that, for NCI, allowed the captioning of live television. Having the ability to watch the evening news and sporting events with captions made the purchase of a decoder more attractive, as did the decline in its price over time.

When the American television network NBC showed the twelve hour series Shogun in 1980, thousands of deaf people were able to enjoy it. The $20 million series was closed captioned and 30,000 owners of the special decoder sets received the dialogue.

Jeffrey Krauss of the FCC admitted that deaf people had not had full access to television from the very beginning: “But by early 1980 it should be possible for the deaf and [hard of hearing] to enjoy many of the same programs we do via a new system called ‘closed captioning.’” Sigmond Epstein, a deaf printer from Annandale, Virginia, felt that “there is more than a 100 percent increase in understanding.” And Lynn Ballard, a twenty-five-year-old deaf student from Chatham, New Jersey, believed that closed captioning would “improve the English language skills and increase the vocabulary of deaf children.” Newspaper reports proliferated, describing the newfound joy among deaf people in gaining access to the common television. Educators recognized the technological advance as a huge leap forward. “I consider closed captioning the single most important breakthrough to give the deaf access to this vital medium,” said Edward C. Merrill Jr., president of Gallaudet College, adding presciently, “Its usage will expand beyond the hearing-impaired.” And an ex-cop cried when his deaf wife wept for joy at understanding Barney Miller. He wrote a letter to the TV networks, cosigned by their six small children, to tell of the new world of entertainment and learning now open to his wife.

3-2-1 Contact was among the first group of television programs, and the first children’s program, to be captioned in March 1980. This science education show produced by Children’s Television Workshop aired on PBS member stations for eight years. Later that same year, Sesame Street became the second children’s program to be captioned and became the longest running captioned children’s program. — “NCI Recap’d,” National Captioning Institute

The enthusiasm continued to spread swiftly among deaf people. Alan Hurwitz, then associate dean for Educational Support Services at NTID, and his family were all excited about the captioning of primetime television programs. Hurwitz, who would eventually be president of Gallaudet University, was, like everyone else at this time, hooked on the new closed captioning technology. One of his favorite programs in 1981 was Dynasty, which was shown weekly on Wednesday night at 9 p.m. He flew to Washington, DC, early one Wednesday morning to meet with congressional staff members in different offices all day long. Not having a videotape recorder, he made sure he had scheduled a flight back home in time to watch Dynasty. After the meetings he arrived at the airport on time only to find out that the plane was overbooked and he was bumped off and scheduled for a flight the next morning. He panicked and argued with the airline clerk that he had to be home that night, and stressed that he couldn’t miss the flight. He was put on a waiting list and there were several folks ahead of him. Then, when he learned that he would definitely miss the flight, he went back to the clerk and insisted that he get on the plane. He explained that he had no way to contact his wife and was concerned about his family. Finally, the clerk went inside the plane and asked if anyone would like to get off and get a reward for an additional flight at no cost. One passenger volunteered to get off and Hurwitz was allowed to take his seat. The plane left a bit late and arrived in Rochester barely in time for him to run to his car in the parking lot and drive home to watch Dynasty!

And even with the positive response from many consumers, it was reported in 1981 that the Sears TeleCaption decoders were not selling well. It was a catch-22 situation. “People hesitate to buy because more programs aren’t captioned; more programs aren’t captioned because not that large an audience has adapters.” Increasing one would clearly increase the other. The question was whether to wait for “the other” to happen. To do so would most likely endanger a considerable federal investment as well as the continued existence of the system. Some theorized that the major factors for the poor sale of decoders were the depressed state of the economy, the lack of a captioned prime-time national news program (which deaf and hard of hearing people cited as a top priority), insufficient numbers of closed captioned programs, and an unrealistic expectation by some purchasers that decoder prices would decrease in spite of the fact that the retailer markup was slightly above the actual production cost.

Captioning a TV Program: A Continuing Challenge

On average, it took twenty-five to forty hours to caption a one-hour program. First, the script was typed verbatim, including every utterance such as “uh,” stuttering, and so forth. Asterisks were inserted to indicate changes in speakers. Next, the time and place of the wording was checked in the program. The transcript was examined for accuracy, noting when the audio starts and stops, and then it was necessary to decide whether the captions should be placed on the left, right, or center of the screen. In 1981, NCI’s goal was to provide no more than 120 to 140 reading words per minute for adult programs and sixty to ninety for children’s programs.

“We have to give time for looking at the picture,” Linda Carson, manager of standards and training at NCI, explained. “A lot of TV audio goes up to 250 or 300 words per minute. That’s tough for caption writers. If the time lapse for a 15-word sentence is 4 ½ seconds, then the captioner checks the rate computation chart and finds out she’s got to do it in nine words.”
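
The rate-computation chart Carson mentions is simple arithmetic: the caption’s on-screen time multiplied by the target reading rate caps how many words will fit. A sketch of that math (the function name is mine):

```python
def caption_word_budget(on_screen_seconds, target_wpm):
    """Max words a caption can hold at a target reading rate."""
    return int(on_screen_seconds * target_wpm / 60)

# NCI's 1981 adult target of 120 wpm over a 4.5-second lapse:
print(caption_word_budget(4.5, 120))  # 9 -- so the 15-word sentence must be cut to 9
# The low end of the children's target, 60 wpm:
print(caption_word_budget(4.5, 60))   # 4
```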

Carl Jensema, NCI’s director of research, who lost his hearing at the age of nine, explained that at the start of kindergarten, hearing children have about 5,000 words in their speaking vocabulary, whereas many deaf children are lucky to have fifty. Consequently, deaf children had very little vocabulary for the school to build on. Jensema believed that closed captioning might be the biggest breakthrough for deaf people since the hearing aid. He was certain that a high degree of exposure to spoken language through captioned television was the key to enhanced language skills in deaf people.

CBS Resists

Although ABC, PBS, and NBC were involved in collaborating with NCI to bring captions to deaf audiences, the system CBS supported, teletext, was developed in the United Kingdom and was at least three years away from implementation. “It seems to me that CBS, by not going along with the other networks, might be working in derogation of helping the deaf or the hearing-impaired to get this service at an earlier date—and I don’t like it,” FCC commissioner Joseph Fogarty told Gene Mater, assistant to the president of the CBS Broadcast Group. Despite the success of line 21 captioning, CBS’s Mater believed the teletext system was “so much better” and the existing system was “antiquated.” “I think what’s unfortunate is that the leadership of the hearing-impaired community has not seen fit to support teletext. Those people who have seen teletext recognize it as a communications revolution for the deaf.” In contrast, NCI’s Jeff Hutchins summarized that the World System Teletext presented various disadvantages. It could not provide real-time captioning, “at least not in the way we have seen it . . .” Also, it could not work with home videotape. He believed that even if World System Teletext were adopted by the networks and other program suppliers, the technology would not be an answer for the needs of the American Deaf community. He also explained that “too many services now enjoyed by decoder owners would be lost.”

CBS even petitioned the FCC in July 1980 for a national teletext broadcasting standard. Following this, the Los Angeles CBS affiliate announced plans to test teletext in April 1981. “CBS was so opposed to line 21 that even when advertisers captioned their commercials at no charge to CBS,” Karen Peltz Strauss wrote, “the network allegedly promised to strip the captions off before airing the ads.”

CBS continued its refusal to join the closed captioning program, largely because of its own research into the teletext system and because of the comparatively low number of adapters purchased. The NAD accused CBS of failing to cooperate with deaf television viewers by refusing to caption its TV programs.

The NAD planned nationwide protests shortly after this. Hundreds of captioning activists gathered at studios around the country. In Cedar Rapids, one young child carried a sign that read, “Please caption for my Mom and Dad.” Gertie Galloway was one of the disappointed deaf consumers. “CBS has not cooperated with the deaf community,” she stated. “We feel we have a right to access to TV programs.” She was one of an estimated 300 to 400 people carrying signs, who marched in front of the CBS studio in Washington and who were asking supporters to refuse to watch CBS for the day. Similar demonstrations were held in New York, where there were 500 people picketing, and the association said that protests had been scheduled in the more than 200 communities where CBS had affiliates.

Harold Kinkade, the Iowa Association of the Deaf vice president, said, “I don’t think deaf people are going to give up on this one. We always fight for our rights to be equal with the people with hearing.”

The drama increased in August 1982 when it was announced that NBC was dropping captions due to decreased demand. It was two years after NBC had become a charter subscriber. John Ball, president of NCI, said, “There is no question that this hurts. This was a major revenue source for NCI. I think the next six months or so are going to be crucial for us.”

Captioning advocates included representatives from NTID, the National Fraternal Society of the Deaf, Gallaudet, and NAD. Karen Peltz Strauss tells the story of Phil Bravin, chair of a newly established NAD TV Access Committee, who represented the Deaf community in a meeting with NBC executives. Although the NBC meeting was successful, CBS was still resisting and Bravin persisted. As Strauss summarized, “After one particularly frustrating three-hour meeting with the CBS President of Affiliate Relations Tony Malara, Bravin left, promising to ‘see you on the streets of America.’”

In 1984, CBS finally gave in, and the network dual encoded its television programs with both teletext and line 21 captions. The issue with NBC also resolved, and by 1987 the network was paying a third of the cost of the prime-time closed captioning. The rest was covered by such sources as independent producers and NCI, with funds from the US Department of Education used for captioning on CBS and ABC as well. 

In his book Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television, Gregory J. Downey summarized that because the film industry was unwilling to perform same-language subtitling for its domestic audience, the focus of deaf and hard of hearing persons’ “educational and activist efforts toward media justice through subtitling in the 1970s and 1980s had decisively moved away from the high culture of film and instead toward the mass market of television.”

Meanwhile, teachers and media specialists in schools for deaf children across the United States were reporting that their students voluntarily watched captioned TV shows recorded on videocassettes over and over again. These youngsters were engaged in reading, with its many dimensions and functions. In the opinion of some educators, television was indeed helping children learn to read.

People at NCI looked forward to spin-offs from their efforts. They liked to point out that experiments on behalf of deaf people produced the telephone and that the search for a military code to be read in the dark led to braille. Closed captioning should be no different in that regard. The technology also showed promise for instructing hearing children in language skills. Fairfax County public schools in Virginia authorized a pilot project to study the effectiveness of captioned television as a source of reading material. The study explored the use of closed captioned television in elementary classrooms, evaluated teacher and student acceptance of captioning as an aid to teaching reading, and served as a guide to possible future expansion of activities in this area. Instead of considering television as part of the problem in children’s declining reading and comprehension skills, Fairfax County wanted to make it part of the solution. Promising results were found in this study as well as in other NCI-funded studies with hearing children, and when NCI’s John Ball submitted his budget request to Congress for fiscal year 1987 he was citing “at least 1,500,000 learning disabled children” as a potential audience for captioning and the market for decoder purchases.

In a personal tribute to Carl Jensema, Jeff Hutchins wrote that the only aspect of NCI that really made it an “institute” was the work Carl did to research many different aspects of captioning, including its readability and efficacy among consumers. His work led to a revision of techniques, which made captioning more effective. Once Carl left NCI and the research department was shut down, NCI was not really an “institute” any longer. John Ball also believed in the importance of Jensema’s research at NCI. His studies clearly demonstrated the impact of captioning on NCI’s important audience.

Real-Time Captioning

As early as 1978, the captioning program began to fund developmental work in real-time captioning with the objective of making it possible to caption live programs, such as news, sports, the Academy Awards, and space shuttle launches. This developmental work, however, did not result in the system finally being used. The Central Intelligence Agency (CIA) was exploring a system that would allow the spoken word to appear in printed text. As it turned out, a private concern resulted from the CIA project, Stenocomp, which marketed computer translations to court reporters. The Stenocomp system relied on a mainframe computer and was thus too cumbersome. However, when Stenocomp went out of business, a new firm developed—Translation Systems, Inc. (TSI) in Rockville, Maryland. Advances in computer technology made it possible to install the Stenocomp software into a minicomputer. This made it possible for the NCI to begin real-time captioning using a modified stenotype machine linked to a computer via a cable.

On December 20, 1982, the Ninety-Seventh Congress passed a joint resolution authorizing President Ronald Reagan to proclaim December as “National Close-Captioned Television Month.” The proclamation was in recognition of the NCI service that made television programs meaningful and understandable for deaf and hard of hearing people in the United States.

By 1982, NCI was applying real-time captioning to a variety of televised events, including newscasts, sports events, and other live broadcasts, bringing deaf households into national conversations. The information, with correct punctuation, was brought to viewers through the work of stenographers trained as captioners typing at speeds of up to 250 words per minute. Real-time captioning was used in the Supreme Court to allow a deaf attorney, Michael Chatoff, to understand the justices and other attorneys.

However, that level of fidelity was not the norm on television for many years, and problems existed with real-time captioning. In real-time captioning, an individual typed the message into an electric stenotype machine, similar to those used in courtrooms, and the message included some shorthand. A computer translated the words into captions, which were then projected on the screen. Because “this captioning occurred ‘live’ and relies on a vocabulary stored in the software of the computer, misspellings and errors could and did occur during transcriptions.”

Over the years, many have worked toward error reduction in real-time captioning. As the Hearing Loss Association of America has summarized, “Although real-time captioning strives to reach 98 percent accuracy, the audience will see errors. The caption writer may mishear a word, hear an unfamiliar word, or have an error in the software dictionary. In addition, transmission problems can create technical errors that are not under the control of the caption writer.”

At times, captioners work in teams, similar to some sign language interpreters, and provide quick corrections. This was the approach the pioneer Martin Block used during the Academy Awards in April 1982. Block typed the captions while a team of assistants provided him with correct spellings of the award nominees.

There has also been a growing body of educational research supporting the benefits of captions. As one example, E. Ross Stuckless referred to the concept of real-time caption technology in the early 1980s as the “computerized near-instant conversion of spoken English into readable print.” He also described the possibility of using real-time captioning in the classroom. Michael S. Stinson, another former colleague of mine and also a deaf research faculty member at NTID at RIT, was involved with Stuckless in the first implementation and evaluation of real-time captioning as an access service in the classroom. Stinson subsequently obtained numerous grants to develop C-Print access through real-time captioning at NTID, where hundreds of deaf and hard of hearing students have benefited in this postsecondary program. C-Print also was found successful in K–12 programs.

Communication Access Real-Time Translation (CART) is another service provided in a variety of educational environments, including small groups, conventions, and remote transmissions to thousands of participants viewing through streaming text. Displays include computers, projection screens, monitors, or mobile devices, or the text may be included on the same screen as a PowerPoint presentation.

Special approaches have been used in educational environments. For example, at NTID, where C-Print was developed by Stinson, the scripts of the classroom presentations and communication between professors and students are printed out, and errors are corrected and given to the students to study.

In October 1984, ABC’s World News This Morning became the first daytime television program to be broadcast to viewers with decoders through real-time captioning technology. Within a few weeks, ABC’s Good Morning America was broadcast with captions as well. “This is a major milestone in the evolution of the closed-captioned television service,” John E. D. Ball declared, describing it as a “valued medium” to deaf and hard of hearing viewers. Don Thieme, a spokesman for NCI, explained that the Department of Education had provided The Caption Center with a $5.3 million contract. These two programs joined ABC’s evening news program World News Tonight and the magazine show 20/20 as the only regularly scheduled news and public affairs programs available to deaf viewers. The captioned news programs would be phased in gradually during the summer and early fall. Real-time captioning was also provided for the presidential political debates around this time. More than sixty-five home video movies had also been captioned for deaf people. This was an important step toward providing more access to entertainment movies for deaf consumers.

The first time the Super Bowl was aired with closed captions was on January 20, 1985. In September 1985, ABC’s Monday Night Football became the first sports series to include real-time captioning of commentary. ABC, its affiliates, the US Department of Education, advertisers, corporations, program producers, and NCI’s Caption Club helped to fund this program. Using stenotype machines, speed typists in Falls Church, Virginia, listened to the telecast and produced the captions at about 250 words per minute; the captions appeared on the screen about four seconds later. Each word was not typed separately. Instead, the captioner stroked the words out phonetically in a type of shorthand. Then a computer translated the strokes back into the printed word. These words were sent through phone lines to the ABC control room in New York City, where they were added to the network signal and transmitted across the country. Darlene Leasure, who was responsible for football, described one of the challenges she encountered: “When I was programming my computer at the beginning of the season, I found thirteen Darrels with seven different spellings in the NFL. It’s tough keeping all those Darrels straight.”
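The translation step described above, a stored vocabulary mapping phonetic strokes to words, can be sketched in a few lines. The strokes and the tiny dictionary below are invented for illustration (real steno vocabularies hold tens of thousands of entries); the fallback behavior mirrors how an unknown stroke could surface on screen as a visible error.

```python
def translate_strokes(strokes, dictionary):
    """Look each stroke up in the stored vocabulary; unknown strokes
    fall through untranslated -- the kind of on-screen error early
    caption viewers would occasionally see."""
    return " ".join(dictionary.get(stroke, stroke) for stroke in strokes)

# Hypothetical strokes for "the quick brown fox", with the last stroke
# deliberately missing from the vocabulary.
VOCAB = {"-T": "the", "KWEUBG": "quick", "PWROUPB": "brown"}
print(translate_strokes(["-T", "KWEUBG", "PWROUPB", "TPOBGS"], VOCAB))
# -> the quick brown TPOBGS
```

This is also why captioners like Leasure pre-programmed proper names (all those Darrels) before a broadcast: a name absent from the dictionary can't be translated on the fly.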

As TV shows with closed captions grew in popularity, deaf people were attracted away from the captioned film showings at social clubs or other such gatherings. The groups continued to hold their meetings, but for most gatherings the showing of captioned films gradually stopped. At the same time, telecommunications advances had brought telephone access to deaf people and there was less need for face-to-face “live” communication. Together, the visual telecommunications and captioned television technologies profoundly impacted the way deaf people interacted.

'Death Stranding Director's Cut' arrives March 30th on PC

Two and a half years after Death Stranding, the genre-bending action adventure (and spiritual successor to Paperboy) from acclaimed director Hideo Kojima, hit PlayStations and PCs the world over, the definitive version of the game now has a firm release date for PC: Death Stranding Director's Cut will arrive via Steam on March 30th, 2022.

We are pleased to announce that DEATH STRANDING DIRECTOR’S CUT will be coming to PC!
This will launch simultaneously on Steam and the Epic Games store in Spring 2022. #DeathStrandingDC #KojimaProductions #505Games pic.twitter.com/HNyS7aLheH

— KOJIMA PRODUCTIONS (Eng) (@KojiPro2015_EN) January 4, 2022

Kojima Productions had previously announced that the new version would arrive at some point this spring, but with Thursday's news, gamers can start getting their delivery muscles limbered in earnest. Being a Director's Cut doesn't just mean that players will be treated to even longer cutscenes. Existing PS5 players already have access to new weapons, missions, boss battles and a racing mode, while newly minted PC gamers will get those bonuses as well as the ability to leverage Intel's Xe Super Sampling (XeSS) for improved graphics and performance.

Tesla kept its record 2021 profits rolling right through Q4

Following a profitable — and, ahem, notable — 2021, Tesla remains at the forefront of EV production in America as we enter the new year. With deliveries up nearly 90 percent over 2020’s figures, Tesla achieved “the highest quarterly operating margin among all volume OEMs” during that time frame, according to the company’s Q4 figures released Wednesday. The company not only hit $5.5 billion in net income despite a $6.5 billion outlay for new production facilities in Berlin and Austin, Texas, it also exceeded its own revenue goals by a cool billion dollars.

In 2021, Tesla produced 930,000 electric vehicles (the overwhelming majority of them Model 3s and Ys) and delivered 936,000 of them to customers around the world. At the same time, the company expanded its proprietary Supercharger network by a third, which now totals 3,476 stations.
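The "nearly 90 percent" growth claim is easy to sanity-check. The 2021 delivery figure comes from the article; the 2020 total (roughly 499,550 vehicles) is assumed here from Tesla's publicly reported 2020 delivery numbers.

```python
# Back-of-the-envelope check on Tesla's year-over-year delivery growth.
deliveries_2021 = 936_000   # per the article
deliveries_2020 = 499_550   # assumed from Tesla's 2020 delivery report
growth_pct = (deliveries_2021 / deliveries_2020 - 1) * 100
print(f"{growth_pct:.1f}% year-over-year growth")  # roughly 87 percent
```

Which lands just under 90 percent, consistent with the company's framing.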

Despite widespread supply chain issues impacting the entire automotive industry, Tesla maintained its production capabilities better than virtually any other automaker. The Fremont factory churned out around 600,000 vehicles last year, with plans to increase that figure even further once the Austin and Berlin plants come online later this year. Production at the Shanghai plant continues to ramp up as well. According to Tesla, it has managed to lower the per-unit cost of producing its vehicles to around $36,000 (and did so in both Q3 and Q4 2021).

Tesla's Q4 investor call happens at 5:30PM ET today; stay tuned for live updates and comment from Tesla executives.

Developing...

Hitting the Books: What autonomous vehicles mean for tomorrow's workforce

In the face of daily pandemic-induced upheavals, the notion of "business as usual" can often seem a quaint and distant notion to today's workforce. But even before we all got stuck in never-ending Zoom meetings, the logistics and transportation sectors (like much of America's economy) were already subtly shifting in the face of continuing advances in robotics, machine learning and autonomous navigation technologies. 

In their new book, The Work of the Future: Building Better Jobs in an Age of Intelligent Machines, an interdisciplinary team of MIT researchers (leveraging insights gleaned from MIT's multi-year Task Force on the Work of the Future) examine the disconnect between improvements in technology and the benefits derived by workers from those advancements. It's not that America is rife with "low-skill workers" as New York's new mayor seems to believe, but rather that the nation is saturated with low-wage, low-quality positions — positions which are excluded from the ever-increasing perks and paychecks enjoyed by knowledge workers. The excerpt below examines the impact vehicular automation will have on rank and file employees, rather than the Musks of the world.

MIT Press

Excerpted from The Work of the Future: Building Better Jobs in an Age of Intelligent Machines by David Autor, David A. Mindell and Elisabeth B. Reynolds. Reprinted with permission from the MIT PRESS. Copyright 2022.


THE ROBOTS YOU CAN SEE: DRIVERLESS CARS, WAREHOUSING AND DISTRIBUTION, AND MANUFACTURING

Few sectors better illustrate the promises and fears of robotics than autonomous cars and trucks. Autonomous vehicles (AVs) are essentially high-speed wheeled industrial robots powered by cutting-edge technologies of perception, machine learning, decision-making, regulation, and user interfaces. Their cultural and symbolic resonance has brought AVs to the forefront of excited press coverage about new technology and has sparked large investments of capital, making a potentially “driverless” future a focal point for hopes and fears of a new era of automation.

The ability to transport goods and people across the landscape under computer control embodies a dream of twenty-first-century technology, and also the potential for massive social change and displacement. In a driverless future, accidents and fatalities could drop significantly. The time that people waste stuck in traffic could be recovered for work or leisure. Urban landscapes might change, requiring less parking and improving safety and efficiency for all. New models for the distribution of goods and services promise a world where people and objects move effortlessly through the physical world, much as bits move effortlessly through the internet.

As recently as a decade ago, it was common to dismiss the notion of driverless cars coming to roads in any form. Federally supported university research in robotics and autonomy had evolved for two generations and had just begun to yield advances in military robotics. Yet today, virtually every carmaker in the world, plus many startups, have engaged to redefine mobility. The implications for job disruption are massive. The auto industry itself accounts for just over 5 percent of all private sector jobs, according to one estimate. Millions more work as drivers and in the web of companies that service and maintain these vehicles.

Task Force members John J. Leonard and David A. Mindell have both participated in the development of these technologies and, with graduate student Erik L. Stayton, have studied their implications. Their research suggests that the grand visions of automation in mobility will not be fully realized in the space of a few years. The variability and complexity of real-world driving conditions require the ability to adapt to unexpected situations that current technologies have not yet mastered. The recent tragedies and scandals surrounding the death of 346 people in two Boeing 737 MAX crashes stemming from flawed software, and the accidents involving self-driving car-testing programs on public roads, have increased public and regulatory scrutiny, adding caution about how quickly these technologies will be widely dispersed. The software in driverless cars remains more complex and less deterministic than that in airliners; we still lack the technology and techniques to certify it as safe. Some even argue that solving for generalized autonomous driving is tantamount to solving for AGI.

Analysis of the best available data suggests that the reshaping of mobility around autonomy will take more than a decade and will proceed in phases, beginning with systems limited to specific geographies such as urban or campus shuttles (such as the recent product announcement from Zoox, an American AV company). Trucking and delivery are also likely use cases for early adoption, and several leading developers are focusing on these applications both in a fully autonomous mode and as augmented, “convoy” systems led by human drivers. In late 2020, in a telling shift for the industry from “robotaxis” to logistics, Uber sold its driverless car unit, having spent billions of dollars with few results. The unit was bought by Amazon-backed Aurora to focus the technology on trucking. More automated systems will eventually spread as technological barriers are overcome, but current fears about a rapid elimination of driving jobs are not supported.

AVs, whether cars, trucks, or buses, combine the industrial heritage of Detroit and the millennial optimism and disruption of Silicon Valley with a DARPA-inspired military vision of unmanned weapons. Truck drivers, bus drivers, taxi drivers, auto mechanics, and insurance adjusters are but a few of the workers expected to be displaced or complemented. This transformation will come in conjunction with a shift toward full electric technology, which would also eliminate some jobs while creating others. Electric cars require fewer parts than conventional cars, for instance, and the shift to electric vehicles will reduce work supplying motors, transmissions, fuel injection systems, pollution control systems, and the like. This change too will create new demands, such as for large scale battery production (that said, the power-hungry sensors and computing of AVs will at least partially offset the efficiency gains of electric cars). AVs may well emerge as part of an evolving mobility ecosystem as a variety of innovations, including connected cars, new mobility business models, and innovations in urban transit, converge to reshape how we move people and goods from place to place.

TRANSPORTATION JOBS IN A DRIVERLESS WORLD

The narrative on AVs suggests the replacement of human drivers by AI-based software systems, themselves created by a few PhD computer scientists in a lab. This is, however, a simplistic reading of the technological transition currently under way, as MIT researchers discovered through their work in Detroit. It is true that AV development organizations tend to have a higher share of workers with advanced degrees compared to the traditional auto industry. Even so, implementation of AV systems requires efforts at all levels, from automation supervision by safety drivers to remote managing and dispatching to customer service and maintenance roles on the ground.

Take, for instance, a current job description for “site supervisor” at a major AV developer. The job responsibilities entail overseeing a team of safety drivers focused in particular on customer satisfaction and reporting feedback on mechanical and vehicle-related issues. The job offers a mid-range salary with benefits, does not require a two- or four-year degree, but does require at least one year of leadership experience and communication skills. Similarly, despite the highly sophisticated machine learning and computer vision algorithms, AV systems rely on technicians routinely calibrating and cleaning various sensors both on the vehicle and in the built environment. The job description for field autonomy technician to maintain AV systems provides a mid-range salary, does not require a four-year degree, and generally requires only background knowledge of vehicle repair and electronics. Some responsibilities are necessary for implementation — including inventorying and budgeting repair parts and hands-on physical work—but not engineering.

The scaling up of AV systems, when it happens, will create many more such jobs, and others devoted to ensuring safety and reliability. Simultaneously, an AV future will require explicit strategies to enable workers displaced from traditional driving roles to transition to secure employment.

A rapid emergence of AVs would be highly disruptive for workers since the US has more than three million commercial vehicle drivers. These drivers often have a high school education or less, or are immigrants facing language barriers. Leonard, Mindell, and Stayton conclude that a slower adoption timeline will ease the impact on workers, enabling current drivers to retire and younger workers to get trained to fill newly created roles, such as monitoring mobile fleets. Again, realistic adoption timelines provide opportunities for shaping technology, adoption, and policy. A 2018 report by Task Force Research Advisory Board member Susan Helper and colleagues discussed a range of plausible scenarios and found the employment impact of AVs to be proportional to the time to widespread adoption. Immediate, sudden automation of the fleet would, of course, put millions out of work, whereas a thirty-year adoption timeline could be accommodated by retirements and generational change.

Meanwhile, car-and-truck makers already make vehicles that augment rather than replace drivers. These products include high-powered cruise control and warning systems frequently found on vehicles sold today. At some level, replacement-type driverless cars will be competing with augmentation-type computer-assisted human drivers. In aviation, this competition went on for decades before unmanned aircraft found their niches, while human-piloted aircraft became highly augmented by automation. When they did arrive, unmanned aircraft such as the US Air Force’s Predator and Reaper vehicles required many more people to operate than traditional aircraft and offered completely novel capabilities, such as persistent, twenty-four-hour surveillance.

Based on the current state of knowledge, we estimate a slow shift toward systems that require no driver, even in trucking, one of the easier use cases, with limited use by 2030. Overall shifts in other modes, including passenger cars, are likely to be no faster.

Even when it’s achieved, a future of AVs will not be jobless. New business models, potentially entirely new industrial sectors, will be spurred by the technology. New roles and specialties will appear in expert, technical fields of engineering of AV systems and vehicle information technologies. Automation supervision or safety driver roles will be critical for levels of automation that will come before fully automated driving. Remote management or dispatcher roles will bring drivers into control rooms and require new skills of interacting with automation. New customer service, field support technician, and maintenance roles will also appear. Perhaps most important, creative use of the technology will enable new businesses and services that are difficult to imagine today. When passenger cars displaced equestrian travel and the myriad occupations that supported it in the 1920s, the roadside motel and fast-food industries rose up to serve the “motoring public.” How will changes in mobility, for example, enable and shape changes in distribution and consumption?

Equally important are the implications of new technologies for how people get to work. As with other new technologies, introducing expensive new autonomous cars into existing mobility ecosystems will just perpetuate existing inequalities of access and opportunity if institutions that support workers don’t evolve as well. In a sweeping study of work, inequality, and transit in the Detroit region, Task Force researchers noted that most workers building Model T and Model A Fords on the early assembly lines traveled to work on streetcars, using Detroit’s then highly developed system. In the century since, particularly in Detroit, but also in cities all across the country, public transit has been an essential service for many workers, but it has also been an instrument facilitating institutional racism, urban flight to job-rich suburbs, and inequality. Public discourse and political decisions favoring highway construction often denigrated and undermined mass transit, with racial undertones. As a result, Black people and other minorities are much more likely to lack access to personal vehicles.

“Technology alone cannot remedy the mobility constraints” that workers face, the study concludes, “and will perpetuate existing inequities absent institutional change.” As with other technologies, deploying new technologies in old systems of transportation will exacerbate their inequalities by “shifting attention toward what is new and away from what is useful, practical, and needed.” Innovating in institutions is as important as innovating in machines; recent decades have seen encouraging pilot programs, but more must be done to scale those pilots to broader use and ensure accountability to the communities they intend to serve. “Transportation offers a unique site of political possibility.”

German Bionic's connected exoskeleton helps workers lift smarter

We’re still quite a ways away from wielding proper Power Loaders but advances in exosuit technology are rapidly changing how people perform physical tasks in their daily lives — some designed to help rehabilitate spinal injury patients, others created to improve a Marine’s warfighting capabilities, and many built simply to make physically repetitive vocations less stressful for the people performing them. But German Bionic claims only one of them is intelligent enough to learn from its users’ mistaken movements: its 5th-generation Cray X.

The Cray X fits on workers like a 7kg backpack, with hip-mounted actuators that move carbon fiber linkages strapped to the upper legs, allowing a person to easily lift and carry loads of up to 30kg (66 pounds) with their legs and back fully supported. Though it doesn’t actively assist the person’s shoulders and arms with the task, the Cray X does offer a Smart Safety Companion system to help mitigate common lifting injuries.

“It's a real time software application that runs in the background and can warn the worker when the ergonomic risk is getting too high,” Norma Steller, German Bionic’s Head of IoT, told Engadget. “For example, recommending a break because we know that… the repetition and the overall stress can lead to fatigue, and fatigue can lead to injuries. This is something we want to prevent.”

The SSC not only collects granular telemetry information — what load is being lifted, ergonomic risks such as twisting while lifting, and potential environmental factors — it also uses a machine learning algorithm to analyze that data and adapt the exoskeleton to the worker wearing it via OTA software updates. Not only is this data displayed to the workers themselves on an attached monitor, the Cray X also transmits it up the supervisory chain, allowing managers to monitor the movements of their employees to ensure that they are not overexerting themselves.

“Since we are collecting every single step and every single lift, the data that we provide is much more accurate,” Steller noted. The data the Cray collects is gathered from real-world use, not lab tests or supervised trials where workers are on their best ergonomic behavior. “Especially in logistics, every single step, every single lift, every single trend is usually planned. But sometimes in the real world, not every plan comes to fulfillment and then we suddenly see workplace performance drop very, very quickly. And with the data we provide, you can actually do an investigation and figure out why [that drop off is occurring].”

Steller sees the Cray X as a "preventative device" designed to ensure workers don't overextend or overexert themselves. “We are a preventative device, so we are preventing injury,” Steller added. “We're not considered a medical [device manufacturer]. We consider ourselves an exoskeleton for industrial use.” As such, the Cray X is IP54 rated for dust and moisture so it can work in all but the dingiest of warehouse environments.

And though the Cray X is designed to be put on and taken off in under a minute, it can be worn for up to a full work shift without being removed thanks to the 5th generation’s new hot-swappable 40V battery system.

“We implemented the hot swapping function so that you can just drop it on the spot without having to turn off the device,” Mauris Kiss, Head of Mechanical R&D at German Bionic, told Engadget. “You can pull out the [spent battery] for a new one, place the old one on the charger — we use the Makita fast charging stations which charge the battery in like 30 to 40 minutes — and then you can just move on. You could potentially work like eight hours without having to take off the exoskeleton.”

For as useful as the current generation of exoskeletal technologies is today, the German Bionic team sees them becoming even more capable, and widespread, in the years ahead. “My feeling is that we will see much more specialized exoskeletons in the future because the technology is more available,” Steller said. “I think they will enter our world, not only in the B2B industrial sectors. We will see them basically everywhere because we have the chance to augment our body and usually humans take the chance to do that. We will see them everywhere, without any real limitation but very specialized to the use case.”

“I really see everyone on the street wearing an exoskeleton in one form or another,” Eric Eitel, German Bionic’s Head of Communications, added. “But I think that the exoskeletons that we are looking for in the future are the active ones. I see them being a lot slimmer, smarter and connected.”

And even as the technology expands to consumer uses, Eitel believes exoskeletons will likely remain a common sight in industrial settings. “There are still a lot of workspaces that cannot be automated and I think that's going to stay like that for a long time. You still have to rely on people so we don't want to replace all the humans. I really see that technology is going alongside [automation].”

“We see robots more as companions, our product is actually a companion,” added Kiss. “I think this can be just another possibility, I mean, there's still situations where automation still makes a lot of sense. When you go into dangerous environments, you should actually automate that. But why should we automate everything?”

NBA games in 4K are coming to YouTube TV

The view from your couch will look a little more like sitting courtside in the days to come: Streamable reported on Thursday that YouTube TV will begin offering select NBA matchups in 4K.

The only, ahem, hoop viewers will need to jump through in order to watch is having a YouTube TV subscription with the 4K Plus add-on. YTTV on its own is $65 a month; the 4K add-on will set you back an additional $12 per month for the first year before nearly doubling to $20 per month thereafter. Not every game will be made available in the higher-resolution format, though Saturday's game between the Cavs and Thunder will.

Senator Klobuchar's major tech reform bill advances out of committee

A major tech reform bill that would prevent the industry's biggest players — Apple, Amazon, Google, and their ilk — from discriminating against smaller businesses that rely on the big platforms' services moved one step closer to passage on Thursday, advancing out of committee on a bipartisan 16-6 vote. Senators Mike Lee, John Cornyn, Ben Sasse, Tom Cotton, Thom Tillis, and Marsha Blackburn all voted against it.

The American Innovation and Choice Online Act, which was sponsored by Senator Amy Klobuchar, would prohibit Amazon from promoting its own Amazon Basics gear over similar products in search results. Similarly, Apple and Google would be barred from pushing their in-house apps over those from third-party developers in their respective app stores. With that vote, the bill has cleared both the antitrust subcommittee and the full Judiciary Committee and will now head to the Senate floor.

Unsurprisingly, the platforms impacted by these proposed regulations are none too pleased with the recent proceedings. Apple's Tim Cook has reportedly been personally lobbying against the bill while Amazon has released the following statement:

There’s a reason why small businesses who sell on Amazon are asking Congress to take a look at the “collateral damage” that will fall on them and their customers, should the American Innovation and Choice Online Act become law. This bill is being rushed through the legislative process without any acknowledgment by its authors of its unintended consequences. As drafted, the bill’s vague prohibitions and unreasonable financial penalties—up to 15% of U.S. revenue, not income—would jeopardize our ability to allow small businesses to sell on Amazon. The bill would also make it difficult for us to guarantee one or two-day shipping for those small businesses' products—key benefits of Amazon Prime for sellers and customers alike. The bill’s authors are targeting common retail practices and, troublingly, appear to single out Amazon while giving preferential treatment to other large retailers that engage in the same practices. We urge the Senate Judiciary Committee to reject Senator Klobuchar and Senator Grassley’s bill and refuse to rush through an ambiguously worded bill with significant unintended consequences.

A similar bill has already passed the judiciary committee's counterpart in the House though the President has not yet weighed in regarding his support of these proposals.

Why airlines and telecoms are fighting over the 5G rollout

Rollouts of new wireless technologies and standards have not always gone well. When the GSM system debuted, it caused hearing aids to buzz and pop with static, while early cell phone signals would occasionally disrupt pacemakers. Today, as carriers expand their 5G networks across the country, they are faced with an equally dangerous prospect: that one of 5G’s spectrum bands may interfere with the radio altimeters aboard commercial aircraft below 2,500 feet, potentially causing their automated landing controls to misjudge the distance from the ground, with the risk of a crash.

Sticking the landing is generally considered one of the more important parts of a flight — which is, in part, why you never hear people applaud during takeoff. As such, the FAA, which regulates American air travel, and the FCC, which controls the use of our telecommunications spectrum, have found themselves at loggerheads over how, when and where 5G might be safely deployed.

5G is shorthand for 5th generation, referring to the latest standard for cellular service. First deployed in 2019, 5G operates on the same basis as its 4G predecessor — accessing the internet and telephone network via radio waves beamed at local cell antennas — but does so at broadband speeds of up to 10 Gbps. However, because 5G can operate on the C band spectrum, there's a chance it could interfere with radio altimeters when transmitters sit in close proximity to airports, especially with older altimeter models lacking sufficient RF shielding.

“The fundamental emissions may lead to blocking interference in the radar altimeter receiver,” a 2020 study by aeronautics technical group RTCA observed. “The spurious emissions, on the other hand, fall within the normal receive bandwidth of the radar altimeter, and may produce undesirable effects such as desensitization due to reduced signal-to-interference-plus-noise ratio (SINR), or false altitude determination due to the erroneous detection of the interference signal as a radar return.”

So when the FCC auctioned off a swath of C band spectrum between 3.7GHz and 3.98GHz last February for a cool $81 billion, the airline industry, under the umbrella of Airlines for America (which represents American Airlines, Delta, FedEx and UPS), took umbrage. These concerns prompted the FAA to issue a warning about the issue last November and led Verizon and AT&T to push back their plans to launch 5G service on C band by a month.

This warning, in turn, prompted the CTIA (the wireless industry’s main lobbying arm) to file its counterargument shortly thereafter, asserting that aircraft already safely fly into and out of more than 40 countries that have broadly deployed 5G networks, such as Denmark and Japan. “If interference were possible, we would have seen it long before now,” CTIA President Meredith Attwell Baker insisted in a November Morning Consult op-ed.

However, those countries have also taken the steps necessary to mitigate many of the potential issues, such as lowering the power of 5G cell towers, relocating towers, or simply pointing their antennas away from landing approaches.

FAA

What’s more, a causal relationship between the 5G rollout and misbehaving altimeters has yet to be established.

"The C-band is closer to the frequencies used by airplane altimeters than previous 5G deployments," Avi Greengart, lead analyst at Techsponential, told Tom’s Guide. "In the US, the 5G we’ve been using has either been used before for prior wireless networks, or it is on really high frequencies with no ability to penetrate a piece of paper, let alone an airplane."

"There is a 200 MHz buffer zone between C-band and altimeter frequencies, and the part of C-band that is opening up this week is even farther from that point,” he continued. “Additionally, similar frequencies are already in use in Europe with no problems observed. If the airplane’s altimeter filters are working properly, there should be no interference whatsoever."

Despite the CTIA’s efforts, the FAA (along with Transportation Secretary Pete Buttigieg) in late December requested that Verizon and AT&T delay their primary rollout by two weeks, starting on January 5th and extending to January 17th, to give the government time to further investigate the issue. Unsurprisingly, those complex issues were not resolved within the given time frame, causing the airline industry to gaze up at the supposedly falling heavens and cry Chicken Little even harder.

In a letter obtained by Reuters, Airlines for America argued the skies would be beset by utter “chaos” amid “catastrophic” failures if 5G were deployed, potentially stranding thousands of passengers overseas. "Unless our major hubs are cleared to fly, the vast majority of the traveling and shipping public will essentially be grounded. This means that on a day like yesterday, more than 1,100 flights and 100,000 passengers would be subjected to cancellations, diversions or delays."

Full airline CEO letter https://t.co/NeXVJbFhzQ pic.twitter.com/ws5Y5HKx1X

— davidshepardson (@davidshepardson) January 17, 2022

"We are writing with urgency to request that 5G be implemented everywhere in the country except within the approximate two miles of airport runways as defined by the FAA on January 19, 2022," the airline CEOs argued. "To be blunt, the nation's commerce will grind to a halt." The airlines also objected to the potential costs of better shielding their avionics (which helped alleviate the previous issues with hearing aids).

For its part, United Airlines told Reuters that it faces "significant restrictions on 787s, 777s, 737s and regional aircraft in major cities like Houston, Newark, Los Angeles, San Francisco and Chicago." That’s about 4 percent of the carrier’s daily traffic. These restrictions would apply to cargo aircraft as well as passenger planes, which will likely further exacerbate the nation’s current supply chain woes.

The FAA has conceded that 5G cellular technology could potentially cause issues but stopped short of the airline industry’s apocalyptic predictions. “Aircraft with untested altimeters or that need retrofitting or replacement will be unable to perform low-visibility landings where 5G is deployed,” the agency said in a statement, directing airlines that operate Boeing 787s, for example, to take extra precautions when landing on wet or snowy runways, as 5G interference could cause the massive aircraft’s thrust reversers to fail, leaving it to stop on brake power alone.

AT&T is none too happy with the FAA’s course of action either. "We are frustrated by the FAA's inability to do what nearly 40 countries have done, which is to safely deploy 5G technology without disrupting aviation services, and we urge it to do so in a timely manner," an AT&T spokesperson said in a statement.

The FAA is already considering the airlines’ request for buffer zones and, on January 8th, released a list of 50 airports across the country where it plans to implement them. The agency also notes that it has cleared five models of radio altimeter to operate within low-visibility areas where 5G systems operate. These models are installed in more than 60 percent of aircraft flying in the US, including Boeing’s 737 through 777, Airbus’ A310 through A380, and the MD-10/-11.

"We recognize the economic importance of expanding 5G, and we appreciate the wireless companies working with us to protect the flying public and the country’s supply chain. The complex U.S. airspace leads the world in safety because of our high standards for aviation, and we will maintain this commitment as wireless companies deploy 5G," Transportation Secretary Pete Buttigieg said in a statement on Tuesday.

This leaves the FAA in a tight spot. With the two-week delay having already expired, Verizon is moving ahead with its 1,700-city, 100 million-customer rollout. AT&T is doing so as well, though on a more limited basis, in select parts of eight metro areas including Detroit, Chicago, Austin, Dallas-Fort Worth and Houston. The agency has pledged to continue investigating the issue and to regulate based on its findings, though it has not yet disclosed what those next steps might be.

That time France tried to make decimal time a thing

Though Marie Antoinette would be hard-pressed to care, the French Revolution of 1789 set its sights on more than simply toppling the monarchy. Revolutionaries sought to break the nation free from its past, specifically from the clutches of the Catholic church, and point France towards a more glorious and prosperous future. They did so, in part, by radically transforming their measurements of the passage of time.

Throughout the 18th century, most French folks were Catholic, as that was the only religion allowed to be openly practiced in the country, and had been since the revocation of the Edict of Nantes in 1685. As such, the nation had traditionally adhered to the 12-month Gregorian calendar — itself based on even older, sexagesimal (base-60) systems adapted from the Babylonians and Egyptians — while French clocks cycled through 60 minutes per hour and 60 seconds per minute.

But if there was little reason to continue using the established chronology aside from tradition, the revolutionaries figured, why not replace it with a more rational, scientifically backed method, just as the revolution itself sought to bring stability and new order to French society as a whole? And what better system to interpose than the decimal, which already governed the nation’s weights and measures? So, when it wasn’t busy abolishing the privileges of the First and Second Estates, eliminating the church’s power to levy taxes or simply drowning nonjuring Catholic priests en masse, France’s neophyte post-revolution government set about reforming the realm’s calendars and clocks.

The concept of decimal time, wherein the day is broken down into multiples of 10, had first been suggested more than thirty years prior, when French mathematician Jean le Rond d'Alembert argued in 1754: “It would be very desirable that all divisions, for example of the livre, the sou, the toise, the day, the hour, etc. would be from tens into tens. This division would result in much easier and more convenient calculations and would be very preferable to the arbitrary division of the livre into twenty sous, of the sou into twelve deniers, of the day into twenty-four hours, the hour into sixty minutes, etc.”

By the eve of the Revolution, the idea had evolved into a year split into 12 months of 30 days apiece, their names inspired by crops and the prevailing weather in Paris during their occurrences. That there are 365 days in a year is an immutable fact dictated by the movement of the Earth around our local star. So, 12 months of 30 days apiece resulted in 5 days (6 in a leap year!) left over. These, the revolutionaries reserved for national holidays.

Each week was divided into 10 days, every day was split into 10 equal hours, those were split into 100 minutes, with each minute divided into 100 seconds (roughly 1.5 times longer than conventional minutes) and each second into 1000 “tierces.” Individual tierces could also be divided into 1000 even tinier units, called “quatierces.” The implementation of tierces would also lead to the creation of a new unit of length, called the “half-handbreadth,” which is the distance the twilight zone travels along the equator over the course of one tierce, and equal to one billionth of the planet’s circumference — around 4 centimeters.

Decimal time was formally adopted by National Convention decree in 1793: “The day, from midnight to midnight, is divided into ten parts, each part into ten others, and so forth until the smallest measurable portion of duration.” As such, midnight would be denoted as 00:00 while noon would be 5:00.

Public Domain

At midnight of the autumn equinox on September 22nd of that year, France’s Gregorian calendar ushered in 1st Vendémiaire Year II of the French Republican calendar. From there on, every new year would begin at midnight of the Autumn equinox, as observed by the Paris Observatory.

“The new calendar was based on two principles,” a 2017 exhibition at the International Museum of Watches, Looking for Noon at Five O’Clock, noted. “That the Republican year should coincide with the movement of the planets, and that it should measure time more accurately and more symmetrically by applying the decimal system wherever possible. Non-religious, it advocated a rational approach and honored the seasons and work in the fields.”

The main advantage of a decimal time system is that the base used to divide the day is the same base used to write the result, so a time of day reads as an ordinary decimal fraction and conversions between units reduce to moving the decimal point. Quick, how many seconds are there in three hours? The answer, most people will Google, is 10,800 (60 seconds per minute x 60 minutes per hour x 3 hours). In decimal time, you simply get 30,000 (3 hours x 10,000 seconds per hour).
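To illustrate the arithmetic, here's a minimal sketch (not historical tooling) that converts a conventional 24-hour clock reading to Republican decimal time by rescaling the fraction of the day elapsed:

```python
# Convert a conventional time of day to French Revolutionary decimal time.
# A decimal day has 10 hours x 100 minutes x 100 seconds = 100,000
# decimal seconds, mapped onto the 86,400 conventional seconds of a day.

def to_decimal_time(hours, minutes, seconds):
    """Return (decimal_hour, decimal_minute, decimal_second) for a
    conventional 24-hour clock reading."""
    conventional = hours * 3600 + minutes * 60 + seconds
    # Fraction of the day elapsed, rescaled to 100,000 decimal seconds.
    decimal_total = conventional * 100_000 // 86_400
    dh, rem = divmod(decimal_total, 10_000)  # 10,000 decimal seconds/hour
    dm, ds = divmod(rem, 100)                # 100 decimal seconds/minute
    return dh, dm, ds

print(to_decimal_time(12, 0, 0))  # noon -> (5, 0, 0)
print(to_decimal_time(0, 0, 0))   # midnight -> (0, 0, 0)
```

Noon lands at 5:00:00 on the decimal clock, exactly as the Convention's decree implies.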

However, due to an oversight in its otherwise logical design on account of gaps in astronomical knowledge, the Republican calendar struggled to properly accommodate leap years. “The four-year period, after which the addition of a day is usually necessary, is called the Franciade in memory of the revolution which, after four years of effort, led France to republican government,” the National Convention decreed. “The fourth year of the Franciade is called Sextile.”

The problem is that leap years, if we’re counting new years by midnights on the autumnal equinox in Paris, don’t consistently fall every four years. By equinox measure, the first leap year of the Republican calendar would actually have had to occur in year III, while the leaps in years XV and XX would have happened half a decade apart.

There were also more practical issues with swapping the nation’s chronology over to an entirely new system, like the fact that people already owned perfectly good clocks, which they’d have to replace were decimal time to remain in effect. The system was also wildly unpopular with the working class, who would receive only one day of rest in ten under the Republican calendar (plus a half day on the fifth) rather than the existing Gregorian one in seven. And the ten-day week played havoc with traditional Sunday religious services, seeing as how Sunday would cease to exist.

Overall, the idea simply failed to capture public support — despite edicts demanding the creation of decimal-based clocks — and was officially suspended on April 7th, 1795. The French then took a quick crack at metric time, which similarly measured time’s passage in factors of ten but based its progression in conventional seconds (aka 1/86400th of a day). Of course all of these efforts were rendered moot when Napoleon declared himself emperor in 1804, made peace with the Vatican and reinstituted the Gregorian calendar, thereby relegating both the Republican calendar and decimal time to the dustbin of history. The lesson here being, unless you’ve TNG’d yourself into a temporal loop, don’t try to fix what isn’t already broken, especially when it might earn you a trip to the guillotine.