There are vanishingly few places in Microsoft's business ecosystem that remain untouched by January's OpenAI deal, with GPT-4-backed chatbot and generative capabilities coming to Office products like Word and Excel, to Bing Search, and directly to the Edge browser. During the Microsoft Build 2023 conference on Tuesday, company executives clarified and confirmed that the company's 365 Copilot AI — the same one going into Office — will be "natively integrated" into the Edge browser.
Microsoft 365 Copilot essentially takes all of your Graph information — data from your Calendar, Word docs, emails and chat logs — and smashes it together, feeding that informatic slurry to an array of large language models to provide AI-backed assistance personalized to your business.
"You can type natural language requests like 'Tell my team how we updated the product strategy today,'" Lindsay Kubasik, Group Product Manager, Edge Enterprise wrote in a Tuesday blog post. "Microsoft 365 Copilot will generate a status update based on the morning’s meetings, emails and chat threads."
By integrating 365 Copilot into the browser itself, users will be able to request additional context even more directly. "As you’re looking at a file your colleague shared, you can simply ask, 'What are the key takeaways from this document?'" and get answers from 365 Copilot in real time. Even on-page search (Ctrl+F) is getting smarter thanks to the deeper integration. The company is also incorporating the same open plugin standard launched by OpenAI, ensuring interoperability between ChatGPT and 365 Copilot products.
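Under the hood, a request like that presumably amounts to grounding a language-model prompt in the user's Microsoft Graph data before generating an answer. The short Python sketch below illustrates that general pattern only; the fetch_graph_context and generate helpers, the sample data and the prompt format are hypothetical stand-ins for this article, not Microsoft's actual Copilot or Graph APIs.

```python
# Illustrative sketch only: grounding an LLM request in workplace context,
# in the spirit of the 365 Copilot behavior described above.
# fetch_graph_context() and generate() are hypothetical stand-ins, not real
# Microsoft Graph or Copilot calls.

def fetch_graph_context(user: str, since: str) -> dict:
    """Pretend to pull the morning's meetings, emails and chats for a user."""
    return {
        "meetings": ["9:00 product strategy sync shifted focus to the Q3 launch"],
        "emails": ["Design asked for an updated spec by Friday"],
        "chats": ["PM thread agreed to drop feature X from v1"],
    }

def build_prompt(request: str, context: dict) -> str:
    """Combine the Graph data and the natural-language request into one prompt."""
    lines = [f"{kind}: {item}" for kind, items in context.items() for item in items]
    return "Context:\n" + "\n".join(lines) + f"\n\nTask: {request}"

def generate(prompt: str) -> str:
    """Stand-in for the large language model call; a real system would send
    the grounded prompt to an LLM and return its completion."""
    return f"[model completion for a {len(prompt)}-character grounded prompt]"

if __name__ == "__main__":
    ctx = fetch_graph_context("me@example.com", since="this morning")
    print(generate(build_prompt(
        "Tell my team how we updated the product strategy today", ctx)))
```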
But it's not ready for rollout just yet and there's no word on when that will change. "Microsoft 365 Copilot is currently in private preview," a Microsoft rep told Engadget. "Microsoft 365 Copilot will be natively integrated into Microsoft Edge, and we will have more to share at a later date."
On the other hand, Microsoft's digital co-working product, Edge Workspaces, will be moving out of preview altogether in the coming months, Kubasik noted. Workspaces allows teams to share links, project websites and working files as a shared set of secured browser tabs. Furthermore, the company is "evolving" its existing work experience into Microsoft Edge for Business. This will include unique visual elements and cues — which should begin rolling out to users today — along with "enterprise controls, security, and productivity features" designed to help keep remote workers' private lives better separated from their work lives.
The company recognizes the need for "a new browser model that enhances users’ privacy while maintaining crucial, enterprise grade controls set at the organizational level," Kubasik wrote. "Microsoft Edge for Business honors the needs of both end users and IT Pros as the browser that automatically separates work and personal browsing into dedicated browser windows with their own separate caches and storage locations, so information stays separate."
Microsoft Edge for Business enters preview today on managed devices. If your organization isn't already using the Edge ecosystem, fear not: a preview for unmanaged devices is in the works for the coming months.
This article originally appeared on Engadget at https://www.engadget.com/microsoft-confirms-365-copilot-ai-will-be-natively-integrated-into-edge-150007852.html?src=rss
Johnny Cash's Hurt hits way different in A Major, as much so as Ring of Fire in G Minor. The difference in tone between the chords is, ahem, a minor one: simply the third note lowered by a half step. But that change can fundamentally alter how a song sounds, and what feelings that song conveys. In their new book Every Brain Needs Music: The Neuroscience of Making and Listening to Music, Dr. Larry S. Sherman, professor of neuroscience at the Oregon Health and Science University, and Dr. Dennis Plies, a music professor at Warner Pacific University, explore the fascinating interplay between our brains, our instruments, our audiences, and the music they make together.
The Minor Fall and The Major Lift: Sorting Out Minor and Major Chords
Another function within areas of the secondary auditory cortex involves how we perceive different chords. For example, part of the auditory cortex (the superior temporal sulcus) appears to help distinguish major from minor chords.
Remarkably, from there, major and minor chords are processed by different areas of the brain outside the auditory cortex, where they are assigned emotional meaning. For example, in Western music, minor keys are perceived as “serious” or “sad” and major keys are perceived as “bright” or “happy.” This is a remarkable response when you think about it: two or three notes played together for a brief period of time, without any other music, can make us think “that is a sad sound” or “that is a happy sound.” People around the world have this response, although the tones that elicit these emotions differ from one culture to another. In a study of how the brain reacts to consonant chords (notes that sound “good” together, like middle C and the E and G above middle C, as in the opening chord of Billy Joel’s “Piano Man”), subjects were played consonant or dissonant chords (notes that sound “bad” together) in the minor and major keys, and their brains were analyzed using a method called positron emission tomography (PET). This method of measuring brain activity is different from the fMRI studies we discussed earlier. PET scanning, like fMRI, can be used to monitor blood flow in the brain as a measure of brain activity, but it uses tracer molecules that are injected into the subjects’ bloodstreams. Although the approach is different, many of the caveats we mentioned for fMRI studies also apply to PET studies. Nonetheless, these authors reported that minor chords activated an area of the brain involved in reward and emotion processing (the right striatum), while major chords induced significant activity in an area important for integrating and making sense of sensory information from various parts of the brain (the left middle temporal gyrus). These findings suggest the locations of pathways in the brain that contribute to a sense of happiness or sadness in response to certain stimuli, like music.
Don't Worry, Be Happy (or Sad): How Composers Manipulate our Emotions
Although major and minor chords by themselves can elicit “happy” or “sad” emotions, our emotional response to music that combines major and minor chords with certain tempos, lyrics, and melodies is more complex. For example, the emotional link to simple chords can have a significant and dynamic impact on the sentiments in lyrics. In some of his talks on the neuroscience of music, Larry, working with singer, pianist, and songwriter Naomi LaViolette, demonstrates this point using Leonard Cohen’s widely known and beloved song “Hallelujah.” Larry introduces the song as an example of how music can influence the meaning of lyrics, and then he plays an upbeat ragtime, with mostly major chords, while Naomi sings Cohen’s lyrics. The audience laughs, but it also finds that the lyrics have far less emotional impact than when sung to the original slow-paced music with several minor chords.
Songwriters take advantage of this effect all the time to highlight their lyrics’ emotional meaning. A study of guitar tablatures (a form of writing down music for guitar) examined the relationship between major and minor chords paired with lyrics and what is called emotional valence: In psychology, emotions considered to have a negative valence include anger and fear, while emotions with positive valence include joy. The study found that major chords are associated with higher-valence lyrics, which is consistent with previous studies showing that major chords evoke more positive emotional responses than minor chords. Thus, in Western music, pairing sad words or phrases with minor chords, and happy words or phrases with major chords, is an effective way to manipulate an audience’s feelings. Doing the opposite can, at the very least, muddle the meaning of the words but can also bring complexity and beauty to the message in the music.
Manipulative composers appear to have been around for a long time. Music was an important part of ancient Greek culture. Although today we read works such as Homer’s Iliad and Odyssey, these texts were meant to be sung with instrumental accompaniment. Surviving texts from many works include detailed information about the notes, scales, effects, and instruments to be used, and the meter of each piece can be deduced from the poetry (for example, the dactylic hexameter of Homer and other epic poetry). Armand D’Angour, a professor of classics at Oxford University, has recently recreated the sounds of ancient Greek music using original texts, music notation, and replicated instruments such as the aulos, which consists of two double-reed pipes played simultaneously by a single performer. Professor D’Angour has organized concerts based on some of these texts, reviving music that has not been heard for over 2,500 years. His work reveals that the music then, like now, uses major and minor tones and changes in meter to highlight the lyrics’ emotional intent. Simple changes in tones elicited emotional responses in the brains of ancient Greeks just as they do today, indicating that our recognition of the emotional value of these tones has been part of how our brains respond to music deep into antiquity.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-every-brain-needs-music-sherman-piles-columbia-university-press-143039604.html?src=rss
Seemingly overnight, Sam Bankman-Fried, the founder of FTX, went from cryptocurrency wunderkind to wanted for questioning by the FBI. After years of unfettered success, the walls of SBF's blockchain empire came crumbling down around him as his tricky financial feats failed and his generalized lack of accounting brought increasing scrutiny from regulators. In SBF: How the FTX Bankruptcy Unwound Crypto’s Very Bad Good Guy, veteran crypto reporter Brady Dale provides a scintillating and clarifying narrative of the entire FTX/Alameda Research saga. In the excerpt below, we glimpse the immediate aftermath of FTX's sudden insolvency.
When I wrote in Chapter 1, “I am drowning in Sam,” I was here, at this point in the story. I was then. I still am, but the tide is going out. I’m not back on land yet, but I know if I rest and I don’t fight it, the land will find me. I don’t need to find the land. Unlike SBF after CoinDesk’s Ian Allison released his post about Alameda’s balance sheet, I can see the shore from where I am.
In late November and early December SBF would not leave the public eye. He was in magazines. He was in the New York Times. He was doing interviews on YouTube. He was on Twitter Spaces.
YouTube gadfly Coffeezilla was chasing him.
NFT influencers were chasing him. TV reporters were chasing him.
A goofy token shill I will not dignify by naming chased him.
Everyone thought if they could just get one more interview from him, it would make sense.
They were all playing into Sam’s hands. Many who felt betrayed believed that his media tour was working to his benefit, that he might actually get away with losing $8 billion (or was it $10 billion?) in customer money. They saw large media companies as complicit in helping to burnish his image.
But then he was arrested, and as I write this, he’s sitting in the sick-bay of an overcrowded prison in the island nation his company had recently made his home.
Looking back on it, there is not a lot of value to say about all these many appearances. We were all just tea bags soaking in the flavors of a collective stew we had boiled up together, a swirling potion of shifting sadness, outrage, intrigue, schadenfreude, and mockery.
SBF appeared in many places, but to my mind, these were the key media appearances:
Axios interview on Nov. 29. A few pieces were published with different parts of the interview; this is where he first said he was down to $100,000.
The first recording from Tiffany Fong’s phone call with SBF, released on YouTube Nov. 29.
The New York Times Dealbook Summit, Nov. 30.
Good Morning America, Dec. 1.
New York Magazine interview on its Intelligencer site, Dec. 1.
The Scoop podcast, Dec. 5.
There were others. People really liked the grilling scam vigilante Coffeezilla gave him, too. Eventually, though, listening to these things was like watching one of those YouTube videos of skateboarding accidents: it was a lot of the same thing over and over.
He was sorry, there was an accounting artifact, he should have had better risk management, he shouldn’t have given up his company, etc., etc., etc.
Were anyone to go through the above accounts and more from that month in a two-day marathon session like I did, I think they would eventually discern a strategy. What appeared to be a series of open conversations had become, to my ears, talking points.
I wrote the same for Axios at the time, but I don’t actually think the talking points are all that interesting anymore now that he’s been arrested. At the end of December 2022, he would be back in his family home, under house arrest, his passport taken, and wearing an ankle monitor. Once those handcuffs went on, the public relations campaign became irrelevant because it was something designed to prepare himself if his lawyers succeeded in keeping him out of jail.
As I wrote in the beginning, as new facts and circumstances arise, the set of possible explanations and futures shrink. Before the handcuffs, it seemed almost likely he might get away with the company’s failure. Once he went to jail, it’s hard to imagine how we ever even saw that possibility.
Because they failed to keep him out of jail, the talking points matter very little.
Except one point, which I think is worth highlighting. The fact that Alameda was drawing customer funds from FTX to cover losses on investments hasn’t been verified by a court yet, but it has been alleged in multiple accounts by different government organizations who seem to have had a look at the books.
That cash (in cryptocurrency form) had moved from FTX to Alameda to meet margin calls, make loans, make investments, and even to make political donations. This is, in my estimation, considerably more nefarious than the way SBF described the hole’s origins in his media tour.
In all of his appearances, he described Alameda as having an excessive margin position. For example, in New York Mag, he said:
A client on FTX put on a very large margin position. FTX fucked up in allowing that position to be put on and in underestimating, in fact, the size of the position itself. That margin position blew out during the extreme events over the last few weeks. I feel really bad about that. And it was a large fuckup of risk analysis and risk attention and, you know, it was with an account that was given too much trust, and not enough skepticism.
In other words, FTX let Alameda’s bets on FTX get too big. We were to imagine Alameda was, I don’t know, 12X long $500 million on bitcoin and 20X long $200 million in ether or something.
All secured by the ftt token. And ftt went bad, and now they were out a bunch of money.
When FTX first fell apart, I went into Slack and explained my understanding of the whole debacle to one of my coworkers this way:
Step 1.
Launch a trading desk. Make piles.
Step 2.
Decide you want to make more piles, so open an exchange that prints money off retail trades and use that money to lend to trading desk.
Step 3.
Lend retail money to trading desk in hopes of quadrupling all gains.
Step 4.
Trading desk loses borrowed money.
Step 5.
[Surprised face emoji]
But SBF was trying to spin it as if it had all stayed inside the house. It was just big bets, but funds hadn’t left FTX. This is still bad, but more negligent, less outright theft.
Jason Choi had been with Spartan Capital when FTX was raising money, and he’d declined to invest because he didn’t like the Alameda/FTX relationship. He explained all this on Twitter after the exchange collapsed. We spoke before complaints had been made against SBF, and I asked him whether he thought it mattered if Alameda had an outsized margin position or had taken customer funds out of the exchange.
“I think functionally they are the same,” he said. “It implies that Alameda is able to run things into seriously negative positions.”
In other words, in terms of what people have lost, each outcome arrives at the same place.
But it does matter in terms of how to understand the decisions made. If funds were taken out and handed to Alameda to use elsewhere, people had to green-light those moves, knowing that they were against the terms of service and against the many assurances that the company had made to the public and their users.
It’s not negligent. It’s willful. Legality aside, it just feels different ethically.
However, for what it’s worth, when SBF and I last spoke he stuck by this explanation: the hole in FTX’s balance sheet was from a margin position Alameda took out. It had failed to adequately hedge, and it had gotten much too long on the wrong collateral.
Before he was arrested, that’s how he described the problem. That’s still how he describes it. He agreed, when we spoke, that it would be different if FTX had been sending actual customer assets to Alameda to use in other ways, but he says that wasn’t happening.
The government is claiming that it did happen, and to do so it’s drawing attention to loans made to SBF and other cofounders, loans they used to make venture investments, to buy stock in Robinhood, political donations, and to purchase real estate.
This points to a part of the story that I didn’t really understand until the complaints started coming out.
When it’s said that someone is a “billionaire,” that doesn’t mean that they have billions of dollars in cash. It doesn’t mean, necessarily, that they can even spend that much money. That doesn’t even mean that they can access billions of dollars in cash, or even many millions.
If someone’s billionaire status is tied up in a stake in a private company, it can be very difficult to turn that value into spendable money. If their status is tied up largely in thinly traded, extremely new crypto tokens, it might be even harder.
In the complaints by the SEC and the CFTC and the DoJ, they allege loans from the Samglomerate, using customer funds, to enable investments, property purchases, political donations, and more. All of these things take actual cash. SBF and his cadre had very high net worth, but it hadn’t occurred to me that they wouldn’t really have access to that much cash until those complaints came out.
Of course SBF, Wang, Singh, and others could borrow money somewhere, and maybe more sophisticated readers than me presumed it was borrowed from banks. Or maybe it was borrowed from some of the new crypto lenders (many of which fell into dire straits). But these various agencies allege something else: the funds were borrowed from FTX customers. And the customers didn’t know. Further, they had no upside. Only downside.
And the downside is here now.
“I thought at the time and still do think that, the size of those loans was substantially less than the profit, than like the liquid trading profit that Alameda had made,” he told me in December. In other words, he denies that the loans were made using FTX user funds.
The whole story of what happened is confusing and dripping in finance jargon and involves a level of mathematics few of us have contemplated recently. It may be that SBF’s story here has been a bet that he was smart enough to cast a spell and convince us all that all the mistakes were only made inside the casino.
And if he had done that well enough, the sting of the error might fade, and if he evaded an arrest and conviction, he might be able to rehabilitate himself in the public eye and apply his considerable gifts, once again.
He might still have won, but then he was arrested.
So in that case, these appearances might really have just been about enjoying that last moment in the spotlight. For some, it’s better to be hated than ignored. But it’s also worth noting that he hasn’t given up on this story.
As I wrote in the prologue: he doesn’t believe the evidence of crimes is there. He seems as eager as anyone to reopen the books at FTX and Alameda. He wants everyone to get from 20 percent of the story to 80 or 90 percent. And maybe we will. And maybe the fact that he seems to want that as much as anyone will prove to be a sign that he was right.
But trust me, if you haven’t seen the many media appearances of November and December 2022, you don’t need to. This chapter gives more than you need to know about what he had to say before they put him in a Bahamas jail.
Sources Referenced
“Exclusive: Sam Bankman-Fried says he’s down to $100,000,” Shen, Lucinda, Axios, Nov. 29, 2022.
“Sam Bankman-Fried Interviewed Live About the Collapse of FTX,” New York Times Events, YouTube, Nov. 30, 2022.
“FTX founder Sam Bankman-Fried denies ‘improper use’ of customer funds,” Stephanopoulos, George, Good Morning America, Dec. 1, 2022.
“Sam Bankman-Fried’s First Interview After FTX Collapse,” Fong, Tiffany, YouTube, posted Nov. 29, 2022.
“What Does Sam Bankman-Fried Have to Say for Himself? An interview with the disgraced CEO,” Wieczner, Jen, New York Magazine, Dec. 1, 2022.
“2-hour sit-down with Sam Bankman-Fried on the FTX scandal,” Quinton, Davis, and Chaparro, Frank, The Scoop podcast, The Block, Dec. 5, 2022.
Jason Choi, interview, mobile, Dec. 11, 2022.
“The SBF media blitz’s key messages,” Dale, Brady, Axios, Dec. 8, 2022.
Interview, Sam Bankman-Fried, phone call with spokesperson, Dec. 30, 2022.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-sbf-brady-dale-wiley-ftx-143033761.html?src=rss
Google has stood at the forefront of many of the tech industry's AI breakthroughs in recent years, Zoubin Ghahramani, Vice President of Google DeepMind, declared in a blog post, asserting that the company's work in foundation models is "the bedrock for the industry and the AI-powered products that billions of people use daily." On Wednesday, Ghahramani and other Google executives took the Shoreline Amphitheater stage to show off the company's latest and greatest large language model, PaLM 2, which now comes in four sizes able to run locally on everything from mobile devices to server farms.
PaLM 2, obviously, is the successor to Google's existing PaLM model that, until recently, powered its experimental Bard AI. "Think of PaLM as a general model that then can be fine tuned to achieve particular tasks," he explained during a reporters' call earlier in the week. "For example: health research teams have fine tuned PaLM with medical knowledge to help answer questions and summarize insights from a variety of dense medical texts." Ghahramani also noted that PaLM was "the first large language model to perform at an expert level on the US medical licensing exam."
Bard now runs on PaLM 2, which offers improved multilingual, reasoning, and coding capabilities, according to the company. The language model has been trained far more heavily on multilingual texts than its predecessor, covering more than 100 languages with improved understanding of cultural idioms and turns of phrase.
It is equally adept at generating programming code in Python and JavaScript. The model has also reportedly demonstrated "improved capabilities in logic, common sense reasoning, and mathematics," thanks to extensive training data from "scientific papers and web pages that contain mathematical expressions."
Even more impressive is that Google was able to spin the base PaLM 2 model off into a family of different sizes, dubbed Gecko, Otter, Bison and Unicorn.
"We built PaLM to to be smaller, faster and more efficient, while increasing its capability," Ghahramani said. "We then distilled this into a family of models in a wide range of sizes so the lightest model can run as an interactive application on mobile devices on the latest Samsung Galaxy." In all, Google is announcing more than two dozen products that will feature PaLM capabilities at Wednesday's I/O event
This article originally appeared on Engadget at https://www.engadget.com/google-unveils-its-multilingual-code-generating-palm-2-language-model-180805304.html?src=rss
For the past two months, anybody wanting to try out Google's new chatbot AI, Bard, had to first register their interest and join a waitlist before being granted access. On Wednesday, the company announced that those days are over. Bard will immediately be dropping the waitlist requirement as it expands to 180 additional countries and territories. What's more, this expanded Bard will be built atop Google's newest Large Language Model, PaLM 2, making it more capable than ever before.
Google hurriedly released the first generation Bard back in February after OpenAI's ChatGPT came out of nowhere and began eating the industry's collective lunch like Gulliver in a Lilliputian cafeteria. Matters were made worse when Bard's initial performances proved less than impressive — especially given Google's generally-accepted status at the forefront of AI development — which hurt both Google's public image and its bottom line. In the intervening months, the company has worked to further develop PaLM, the language model that essentially powers Bard, allowing it to produce better quality and higher fidelity responses, as well as perform new tasks like generating programming code.
As Google executives announced at I/O on Wednesday, Bard has been switched over to the new PaLM 2 platform. As such, users can expect a bevy of new features and functions to roll out in the coming days and weeks. Those include more visual responses to your queries, so when you ask for "must see sights" in New Orleans, you'll be presented with images of the sights you'd see, rather than just a bulleted list or text-based description. Conversely, users will be able to more easily input images to Bard alongside their written queries, bringing Google Lens capabilities to Bard.
Even as Google mixes and matches AI capabilities amongst its products — 25 new offerings running on PaLM 2 are being announced today alone — the company is looking to ally with other industry leaders to further augment Bard's abilities. Google announced on Wednesday that it is partnering with Adobe to bring its Firefly generative AI to Bard as a means to counter Microsoft's Bing Chat and DALL-E 2 offering.
Finally, Google shared news that it will be implementing a number of changes and updates in response to feedback received from the community since launch. Click on a line of generated code or a chatbot answer, and Bard will provide a link to that specific bit's source. Additionally, the company is working to add an export ability so users can easily run generated programming code on Replit or toss their generated works into Docs or Gmail. There will even be a new Dark theme.
This article originally appeared on Engadget at https://www.engadget.com/google-bard-transitions-to-palm-2-and-expands-to-180-countries-172908926.html?src=rss
Back in March, Adobe announced that it too would be jumping into the generative AI pool alongside the likes of Google, Meta, Microsoft and other tech industry heavyweights with the release of Adobe Firefly, a suite of AI features. Available across Adobe's product lineup including Photoshop, After Effects and Premiere Pro, Firefly is designed to eliminate much of the drudge work associated with modern photo and video editing. On Wednesday, Adobe and Google jointly announced during the 2023 I/O event that both Firefly and the Express graphics suite will soon be incorporated into Bard, allowing users to generate, edit and share AI images directly from the chatbot's command line.
Per a release from the company, users will be able to generate an image with Firefly, then edit and modify it using Adobe Express assets, fonts and templates within the Bard platform directly — even post to social media once it's ready. Those generated images will reportedly be of the same high quality that Firefly beta users are already accustomed to as they are all being created from the same database of Adobe Stock images, openly licensed and public domain content.
Additionally, Google and Adobe will leverage the latter's existing Content Authenticity Initiative to mitigate some of the threats to creators that generative AI poses. This includes a "do not train" list which will preclude a piece of art's inclusion in Firefly's training data as well as persistent tags that will tell future viewers whether or not a work was generated and what model was used to make it. Bard users can expect to see the new features begin rolling out in the coming weeks ahead of a wide-scale release.
This article originally appeared on Engadget at https://www.engadget.com/google-is-incorporating-adobes-firefly-ai-image-generator-into-bard-174525371.html?src=rss
With an instantly recognizable hook and effervescent melody, Vanessa Carlton’s debut single A Thousand Miles hit the 2002 Billboard charts like a neutron bomb, earning nominations for the Grammy Award for Song of the Year and the Billboard Music Award for Top 40 Track of the Year. The undeniable smash was featured prominently in 2004’s White Chicks, and Terry Crews credits it with helping launch his acting career.
The accompanying music video saw Carlton and her piano rolling through Newbury Park, California, and portions of downtown Los Angeles. Twenty-one years later, a team of hobbyist roboticists have brought Carlton’s music back to the public ear — this time, to the streets of San Francisco with an animatronic performer and remotely deployable disco ball.
The robot, which currently doesn’t have much of a moniker from the team beyond “The Robot,” is the brainchild of San Francisco-based aerospace engineer Ben Howard, electrical engineer Noah Klugman and lawyer Lane Powell, with additional assistance from local puppeteer Adam Kreutinger. “This is just a thing that we've done together, the three of us, to try to create some joy,” Klugman told Engadget during a recent video call.
The trio first collaborated during the pandemic. “Kids couldn't really trick or treat properly,” Howard explained. “So we put together a kind of spooky Halloween candy dispensing robot that could drive around the streets and any kids who were brave enough could walk up, have a conversation with it and get some candy.” That project inspired them to look into developing a robot with year-round appeal. A “piano playing Muppet seemed like a good thing to do,” he continued, and from that the Thousand-Mile Machine was born.
The team started with an outdated food delivery drone model, obtained from “a friend of a friend,” as the mobile platform on which to build out the rest of the construct. “When companies get rid of these things, if they're cool pieces of hardware, there are plenty of engineers around the city who like to modify them and turn them into fun projects,” Howard explained. “There's a big community of people who are sharing cool hardware around.”
“I came to acquire [the wheeled base] and we wanted to do this music playing robot,” he added. “Then, when you think about a piano player that roams around the city, immediately that [Vanessa Carlton] video comes to mind. It's so iconic.”
The nearly 400-pound robot measures roughly five feet on a side and about four feet tall, narrow enough to fit on a sidewalk and into the TEU container workshop in which it was built at San Francisco’s Box Shop. The wheeled base is controlled remotely and manually, while the puppet’s performance — from the hand and head movements to the big disco ball reveal — is all part of a prerecorded act, akin to Chuck E Cheese’s animatronic Pizza Players band. A single button press is all that’s needed to start the performance.
Vanessa Carlton herself reportedly met the robot during a recent event in Petaluma. “It seemed like she enjoyed it,” Klugman noted. “Everyone we've met in San Francisco has seemed to really love it. I think the response has been overwhelmingly positive.”
“That was very much [the case with] everyone we encountered when we were out filming,” Powell added. “Just really happy to watch it and excited to talk to us about it and just 100 percent positive from all ages and all walks of life all over the city. It was a really cool experience.”
This article originally appeared on Engadget at https://www.engadget.com/a-robot-puppet-rolled-through-san-francisco-singing-vanessa-carlton-hits-170020897.html?src=rss
It is not hard — at all — to trick today’s chatbots into discussing taboo topics, regurgitating bigoted content and spreading misinformation. That’s why AI pioneer Anthropic has imbued its generative AI, Claude, with a mix of 10 secret principles of fairness, which it unveiled in March. In a blog post Tuesday, the company further explained how its Constitutional AI system is designed and how it is intended to operate.
Normally, when a generative AI model is being trained, there’s a human in the loop to provide quality control and feedback on the outputs — like when ChatGPT or Bard asks you to rate your conversations with their systems. “For us, this involved having human contractors compare two responses from a model and select the one they felt was better according to some principle (for example, choosing the one that was more helpful, or more harmless),” the Anthropic team wrote.
The problem with this method is that a human also has to be in the loop for the really horrific and disturbing outputs. Nobody needs to see that, and even fewer need to be paid $1.50 an hour by Meta to see that. The human advisor method also sucks at scaling: there simply isn’t the time or the resources to do it with people. Which is why Anthropic is doing it with another AI.
Just as Pinocchio had Jiminy Cricket, Luke had Yoda and Jim had Shart, Claude has its Constitution. “At a high level, the constitution guides the model to take on the normative behavior described [therein],” the Anthropic team explained, whether that’s “helping to avoid toxic or discriminatory outputs, avoiding helping a human engage in illegal or unethical activities, and broadly creating an AI system that is ‘helpful, honest, and harmless.’”
According to Anthropic, this training method can produce Pareto improvements in the AI’s subsequent performance compared to one trained only on human feedback. Essentially, the human in the loop has been replaced by an AI and now everything is reportedly better than ever. “In our tests, our CAI-model responded more appropriately to adversarial inputs while still producing helpful answers and not being evasive,” Anthropic wrote. “The model received no human data on harmlessness, meaning all results on harmlessness came purely from AI supervision.”
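For a concrete picture of that loop, here is a minimal, purely illustrative Python sketch of its shape: the model critiques and revises its own drafts against a constitution, and an AI "judge" rather than a human contractor produces the preference labels used for training. The function names, the placeholder principles and the length-based judge heuristic are invented for this article; they are not Anthropic's actual code, constitution or API.

```python
# A toy sketch of constitution-guided AI feedback, per the description above.
# Everything here is a hypothetical stand-in, not Anthropic's implementation.

CONSTITUTION = [
    "Choose the response that is more helpful to the user.",
    "Choose the response that is less likely to be harmful or toxic.",
]

def draft_response(prompt: str) -> str:
    """Stand-in for the base model's first, unfiltered answer."""
    return f"Draft answer to: {prompt}"

def critique_and_revise(response: str, principle: str) -> str:
    """AI self-critique step: the model rewrites its own draft so it better
    satisfies one constitutional principle (no human in the loop)."""
    return f"{response} [revised to satisfy: {principle}]"

def ai_preference(resp_a: str, resp_b: str, principle: str) -> str:
    """AI feedback step: a judge model picks whichever response better follows
    the principle; here a trivial length heuristic stands in for that judge."""
    return resp_a if len(resp_a) >= len(resp_b) else resp_b

def build_preference_pair(prompt: str) -> dict:
    """Produce one (chosen, rejected) training pair the way an RLAIF-style
    pipeline might, with the constitution replacing the human contractor."""
    original = draft_response(prompt)
    revised = original
    for principle in CONSTITUTION:
        revised = critique_and_revise(revised, principle)
    chosen = ai_preference(original, revised, CONSTITUTION[-1])
    rejected = original if chosen is not original else revised
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

if __name__ == "__main__":
    print(build_preference_pair("How do I pick a strong password?"))
```

In a real pipeline, pairs like these would feed a preference or reward model that then steers the assistant, which is the step Anthropic credits for the harmlessness gains described above.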
The company revealed on Tuesday that its previously undisclosed principles are synthesized from “a range of sources including the UN Declaration of Human Rights, trust and safety best practices, principles proposed by other AI research labs, an effort to capture non-western perspectives, and principles that we discovered work well via our research.”
The company, pointedly getting ahead of the inevitable conservative backlash, has emphasized that “our current constitution is neither finalized nor is it likely the best it can be.”
“There have been critiques from many people that AI models are being trained to reflect a specific viewpoint or political ideology, usually one the critic disagrees with,” the team wrote. “From our perspective, our long-term goal isn’t trying to get our systems to represent a specific ideology, but rather to be able to follow a given set of principles.”
This article originally appeared on Engadget at https://www.engadget.com/anthropic-explains-how-its-constitutional-ai-girds-claude-against-adversarial-inputs-160008153.html?src=rss
If the Wu-Tang had produced it in '23 instead of '93, they'd have called it D.R.E.A.M. — because data rules everything around me. Where once our society brokered power based on the strength of our arms and purse strings, the modern world is driven by data empowering algorithms to sort, silo and sell us out. These black box oracles of imperious and imperceptible decision-making decide who gets home loans, who gets bail, who finds love and who gets their kids taken from them by the state.
In their new book, How Data Happened: A History from the Age of Reason to the Age of Algorithms, which builds off their existing curriculum, Columbia University Professors Chris Wiggins and Matthew L Jones examine how data is curated into actionable information and used to shape everything from our political views and social mores to our military responses and economic activities. In the excerpt below, Wiggins and Jones look at the work of mathematician John McCarthy, the junior Dartmouth professor who single-handedly coined the term "artificial intelligence"... as part of his ploy to secure summer research funding.
A passionate advocate of symbolic approaches, the mathematician John McCarthy is often credited with inventing the term “artificial intelligence,” including by himself: “I invented the term artificial intelligence,” he explained, “when we were trying to get money for a summer study” to aim at “the long term goal of achieving human level intelligence.” The “summer study” in question was titled “The Dartmouth Summer Research Project on Artificial Intelligence,” and the funding requested was from the Rockefeller Foundation. At the time a junior professor of mathematics at Dartmouth, McCarthy was aided in his pitch to Rockefeller by his former mentor Claude Shannon. As McCarthy describes the term’s positioning, “Shannon thought that artificial intelligence was too flashy a term and might attract unfavorable notice.” However, McCarthy wanted to avoid overlap with the existing field of “automata studies” (including “nerve nets” and Turing machines) and took a stand to declare a new field. “So I decided not to fly any false flags anymore.” The ambition was enormous; the 1955 proposal claimed “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” McCarthy ended up with more brain modelers than axiomatic mathematicians of the sort he wanted at the 1956 meeting, which came to be known as the Dartmouth Workshop. The event saw the coming together of diverse, often contradictory efforts to make digital computers perform tasks considered intelligent, yet as historian of artificial intelligence Jonnie Penn argues, the absence of psychological expertise at the workshop meant that the account of intelligence was “informed primarily by a set of specialists working outside the human sciences.” Each participant saw the roots of their enterprise differently. McCarthy reminisced, “anybody who was there was pretty stubborn about pursuing the ideas that he had before he came, nor was there, as far as I could see, any real exchange of ideas.”
Like Turing’s 1950 paper, the 1955 proposal for a summer workshop in artificial intelligence seems in retrospect incredibly prescient. The seven problems that McCarthy, Shannon, and their collaborators proposed to study became major pillars of computer science and the field of artificial intelligence:
“Automatic Computers” (programming languages)
“How Can a Computer be Programmed to Use a Language” (natural language processing)
“Neuron Nets” (neural nets and deep learning)
“Theory of the Size of a Calculation” (computational complexity)
“Self-improvement” (machine learning)
“Abstractions” (feature engineering)
“Randomness and Creativity” (Monte Carlo methods including stochastic learning).
The term “artificial intelligence,” in 1955, was an aspiration rather than a commitment to one method. AI, in this broad sense, involved both discovering what comprises human intelligence by attempting to create machine intelligence as well as a less philosophically fraught effort simply to get computers to perform difficult activities a human might attempt.
Only a few of these aspirations fueled the efforts that, in current usage, became synonymous with artificial intelligence: the idea that machines can learn from data. Among computer scientists, learning from data would be de-emphasized for generations.
Most of the first half century of artificial intelligence focused on combining logic with knowledge hard-coded into machines. Data collected from everyday activities was hardly the focus; it paled in prestige next to logic. In the last five years or so, artificial intelligence and machine learning have begun to be used synonymously; it’s a powerful thought-exercise to remember that it didn’t have to be this way. For the first several decades in the life of artificial intelligence, learning from data seemed to be the wrong approach, a nonscientific approach, used by those who weren’t willing “to just program” the knowledge into the computer. Before data reigned, rules did.
For all their enthusiasm, most participants at the Dartmouth workshop brought few concrete results with them. One group was different. A team from the RAND Corporation, led by Herbert Simon, had brought the goods, in the form of an automated theorem prover. This algorithm could produce proofs of basic arithmetical and logical theorems. But math was just a test case for them. As historian Hunter Heyck has stressed, that group started less from computing or mathematics than from the study of how to understand large bureaucratic organizations and the psychology of the people solving problems within them. For Simon and Newell, human brains and computers were problem solvers of the same genus.
Our position is that the appropriate way to describe a piece of problem-solving behavior is in terms of a program: a specification of what the organism will do under varying environmental circumstances in terms of certain elementary information processes it is capable of performing... Digital computers come into the picture only because they can, by appropriate programming, be induced to execute the same sequences of information processes that humans execute when they are solving problems. Hence, as we shall see, these programs describe both human and machine problem solving at the level of information processes.
Though they provided many of the first major successes in early artificial intelligence, Simon and Newell focused on a practical investigation of the organization of humans. They were interested in human problem-solving that mixed what Jonnie Penn calls a “composite of early twentieth century British symbolic logic and the American administrative logic of a hyper-rationalized organization.” Before adopting the moniker of AI, they positioned their work as the study of “information processing systems” comprising humans and machines alike, that drew on the best understanding of human reasoning of the time.
Simon and his collaborators were deeply involved in debates about the nature of human beings as reasoning animals. Simon later received the Nobel Prize in Economics for his work on the limitations of human rationality. He was concerned, alongside a bevy of postwar intellectuals, with rebutting the notion that human psychology should be understood as animal-like reaction to positive and negative stimuli. Like others, he rejected a behaviorist vision of the human as driven by reflexes, almost automatically, and that learning primarily concerned the accumulation of facts acquired through such experience. Great human capacities, like speaking a natural language or doing advanced mathematics, never could emerge only from experience—they required far more. To focus only on data was to misunderstand human spontaneity and intelligence. This generation of intellectuals, central to the development of cognitive science, stressed abstraction and creativity over the analysis of data, sensory or otherwise. Historian Jamie Cohen-Cole explains, “Learning was not so much a process of acquiring facts about the world as of developing a skill or acquiring proficiency with a conceptual tool that could then be deployed creatively.” This emphasis on the conceptual was central to Simon and Newell’s Logic Theorist program, which didn’t just grind through logical processes, but deployed human-like “heuristics” to accelerate the search for the means to achieve ends. Scholars such as George Pólya investigating how mathematicians solved problems had stressed the creativity involved in using heuristics to solve math problems. So mathematics wasn’t drudgery — it wasn’t like doing lots and lots of long division or of reducing large amounts of data. It was creative activity — and, in the eyes of its makers, a bulwark against totalitarian visions of human beings, whether from the left or the right. (And so, too, was life in a bureaucratic organization — it need not be drudgery in this picture — it could be a place for creativity. Just don’t tell that to its employees.)
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-how-data-happened-wiggins-jones-ww-norton-143036972.html?src=rss
Bipedal locomotion is the worst. You take a step forward and you end up standing there, legs akimbo, until you repeat the process with your other foot. And then do it again, and again, and again — back and forth, left and right, like a putz, until you reach your destination: hopefully, somewhere to sit.
Even worse, walking requires real physical effort. For our distant ancestors migrating into colder climates, slogging through mud and snow and across ice, on foot, quickly ate into their already tight caloric budgets, limiting the distances they could hunt and travel. Sure, the advent of wheels in the fourth millennium BC drastically improved our transportation options, but it would be nearly another 6,000 years before we’d think to strap them to our feet.
A 2007 study by a team out of Oxford University, published in the Biological Journal of the Linnean Society, suggests that the practice of ice skating potentially emerged in Finland, based on evidence from a set of bone “skates” dated to around 1800 BC. The team argued that the region’s large number of interconnected waterways, which froze over every winter, made it the only place in the ancient world cold enough and flat enough for strapping horse shins to the bottoms of our feet to make caloric sense. In fact, the research team found that these skates — even if they were only a quarter as efficient and quick as modern models — offered a ten percent energy savings versus traveling the same route on foot, extending the distance a person could cover each day by roughly 20 km.
“Ice skates were probably the first human powered locomotion tools to take the maximum advantage from the biomechanical properties of the muscular system: even when traveling at relatively high speeds, the skating movement pattern required muscles to shorten slowly so that they could also develop a considerable amount of force,” the team argued.
The practice also appears in western China. In March, archaeologists discovered 3,500-year-old skates made from oxen and horse bone in the country's Xinjiang Uyghur Autonomous Region. The dig team, led by archaeologist Ruan Qiurong of the Xinjiang Institute of Cultural Relics and Archeology, argued that their skates bear striking similarities to those found previously in Europe, suggesting a potential knowledge exchange between the two Bronze Age civilizations.
It wouldn’t be until the mid-1700s that the wheeled variety made their first appearance. Those early bespoke prototypes served in London stage shows as props to simulate ice skating winter scenes, though the identity of their creators has been lost to history. 18th century Belgian clock- and instrument-maker, John Joseph "The Ingenious Mechanic" Merlin, is credited with devising the first inline roller skate, a two-wheeled contraption he dubbed “skaites” and unveiled in the 1760s.
"One of his ingenious novelties was a pair of skaites contrived to run on wheels.” Thomas Busby's Concert Room and Orchestra Anecdotes mentioned in 1805, “when not having provided the means of retarding his velocity, or commanding its direction, he impelled himself against a mirror of more than five hundred pounds value, dashed it to atoms, broke his instrument to pieces and wounded himself most severely."
By the middle of the 19th century, roller skating had migrated out of the art house scene and into the public consciousness. London’s first public roller rink, The Strand, opened in 1857 and set off a decades-long love affair with the English populace. As the burgeoning sport grew in popularity, the skates themselves quickly evolved to flatten the learning curve in taking up the activity.
Merlin’s early two-wheel inline design gave way to the classic four-wheel side-by-side (aka “quad”) build we all remember from the Disco Era. (New York City’s James Leonard Plimpton is credited with their invention in 1863.) Not only did Plimpton’s skates offer a more stable rolling platform, they were the first to incorporate “trucks,” the cushioned, pivoting axles that virtually all modern skates and skateboards use.
A dozen years later, someone finally got around to inventing proper wheel bearings. That someone being William Brown of Birmingham, England. He patented the first modern ball bearing design in 1876 and quickly followed that with a larger design for bicycles in 1877. These patents directly led to today’s ball bearing technology which we can find in everything from skateboards to semi-tractor trailers.
On modern skates, the rotating wheel and the stationary axle are separated by two hollow, disc-shaped devices called bearings. These bearings are designed so that the inner and outer surfaces, which sit in contact with the axle and wheel respectively, can spin freely. They can do this because of a ring of small spherical metal balls sandwiched between the two plates, which roll and rotate without generating significant amounts of friction or heat (thanks to inventor Levant M. Richardson, who patented the use of harder steel bearings in 1884), allowing the spinning wheels to do the same. The advent of this tech meant we no longer had to smear our axles with animal grease, so that in and of itself is a win for humanity, saving us from a future where every indoor roller derby meet would smell of cooked pork fat.
With bearings in your wheels, it's far easier to pick up velocity and achieve a higher top speed, so rather than let the public go “full Merlin” at the local rink, the toe stop was invented in 1876. It remains a common fixture on modern quad skates as well as a select number of inlines, though those more commonly rely on heel stops instead. Despite the design's origins in the 18th century, The Peck & Snyder Company holds a 1900 patent for inline skates, two-wheeled ones at that.
From the dawn of the 20th century, roller skating has been an inextricable part of American culture despite generational swings in the pastime’s popularity. The sport rolled right over from the UK in the early 1900s and experienced an initial surge in popularity until the Great Depression hit in the 1930s.
To keep the interest of an economically stricken public, Chicago-based sports promoter Leo Seltzer invented roller derby in 1935. Seltzer had originally owned a string of movie theaters in Oregon but got into live event promotions when the theaters became losing propositions during the economic downturn, which coincided with a national endurance competition fad (think marathon dancing and pole sitting competitions).
Derby as we know it today grew out of Seltzer’s early roller marathon idea. His inaugural Transcontinental Roller Derby in 1935 lasted several days and drew a crowd of more than 20,000 spectators. In 1937, Seltzer tweaked the competition’s format to allow for more physicality between contestants, and modern roller derby was born.
At the tail end of the 1970s, the industry once again reinvented itself with the introduction of inline skates. In 1979, Scott and Brennan Olsen happened upon an old pair of inline prototypes in Minneapolis that had been developed a decade earlier by the Chicago Roller Skate Company. Competitive ice hockey players themselves, the two immediately saw their potential as an off-season training aid. By this point, inline skate designs had been patented for decades. The tech itself was known, but nobody had managed to make the wheeled boots commercially viable — until the brothers Olsen founded the Rollerblade company in 1980.
You’d think that we’d have learned from Icarus but no, in 1999, Roger Adams had the bright idea for Heelys: skates that were actually shoes but with small wheels mounted in the heels. Not to be outdone, Razor debuted the Jetts Heel Wheels — imagine just the back half of a set of quad skates, but driven by an electric motor that hits 10 mph for up to 30 minutes — in 2018.
And in 2022, our wheeled footwear aspirations came full circle with the release of Moonwalkers: quad skates that are worn like shoes but powered like Jetts, and designed for stepping, not pushing and gliding. Designed “to make you feel like you’re on a moving walkway,” these devices can reportedly accelerate your strides up to 250 percent and adapt to your gait as you use them.
This article originally appeared on Engadget at https://www.engadget.com/human-historical-fascination-wearable-wheels-rollerskates-transportation-154332171.html?src=rss