Even Staples, the office supply store, can’t resist the lure of podcasts. The retailer is teaming up with a company called Spreaker to build podcast studios at six of its stores in the Boston area.
The studios will be soundproof, have enough space for four people to record, and will sync with Spreaker’s technology so people can get discounted access to its hosting and distribution services. A recording specialist will be on hand to help, too, and a 60-minute session costs $60. Although that fee only covers the actual recording time, Staples will give people discounts on editing services from We Edit Podcasts if they need help.
The studios are part of broader store renovations for what the company calls Staples Connect, which are stores designed to be co-working and community spaces for professionals, teachers, and students. The redesign speaks to the larger retail brand movement of making retail spaces more like community meeting spots. Apple’s former retail chief Angela Ahrendts famously called Apple stores “town squares” in 2017, for instance, and she predicted people would hang out in stores designed around this idea just as much as they would come in to buy something specific.
Target also experimented with a different kind of retail space in San Francisco, one where people could play with gadgets before buying them. Called Open House, the store functioned like a smart home, so people could better understand the technology. All of this is to say that it isn’t surprising to see Staples try to innovate on a traditional retail design. And building a podcast studio in-store does speak to the moment audio is having — it just seems odd to build studios for a trend that might eventually die.
Boeing has discovered another software problem on the beleaguered 737 Max that will have to be fixed before the airplane returns to the skies, Bloomberg reported on Thursday. It’s at least the third different software problem that has been discovered since the plane was grounded in March of last year following a pair of fatal crashes that claimed the lives of 346 people.
The new issue apparently has to do with a warning light that helps tell pilots when the trim system — a part of the plane that can lift or lower the nose — isn’t working. Federal Aviation Administration head Steve Dickson said during a talk in London on Thursday that the light was “staying on for longer than a desired period,” according to Bloomberg.
What’s worrisome about this new glitch is that it’s possibly a direct result of the fixes Boeing made to those previous flaws, according to Bloomberg, which reports that the trim system flaw “resulted from Boeing’s redesign of the two flight computers that control the 737 Max to make them more resilient to failure.” The new glitch is also more directly related to the original problem that plagued the 737 Max.
An FAA spokesperson said that “Boeing should have details on any issues they are addressing” and provided The Verge with a mostly recycled statement about how “there is no set timeframe for when the aircraft will be cleared for return to passenger service.”
The agency added that the 737 Max “will be approved only after our safety experts are fully satisfied that all safety-related issues are addressed to the FAA’s satisfaction.”
Boeing did not immediately respond to a request for comment.
After years of promising increased transparency, Facebook is getting granular and showing you how it picks up and mashes together data about you from other companies. Facebook's new tool is indeed illuminating when it comes to getting a glimpse at who tracks you (spoiler: everyone). Its promises to give you a measure of control over the process, however, fall short.
Facebook this week launched an Off-Facebook Activity portal to give users a different and more detailed perspective on the data it hoovers up from other firms. Off-Facebook Activity is exactly what it sounds like: interactions you have with other entities, such as an app on your phone or a retailer you shop at, that Facebook receives data about. Facebook attaches that data to the rest of the information it has about you and uses it for marketing purposes.
More than two years ago, Apple informed the FBI that it planned to roll out end-to-end encryption for iCloud backups, according to Reuters. Apple ultimately dropped the plan at some point after the FBI objected, although the report notes that it is unclear if the federal agency was a factor in the decision.
A former Apple employee told Reuters that the company did not want to risk scrutiny from public officials for potentially protecting criminals, being sued for putting previously accessible data out of the reach of government agencies, or encouraging new legislation against encryption.
"They decided they weren't going to poke the bear anymore," the person said, referring to Apple's legal battle with the FBI in 2016 over unlocking an iPhone used by a shooter in the San Bernardino, California attack. In that case, the FBI ultimately found an alternative way to unlock the iPhone.
Apple faces a similar standoff with the FBI over its refusal to unlock two passcode-protected iPhones that investigators believe were owned by Mohammed Saeed Alshamrani, the suspect in a mass shooting at a Naval Air Station in Florida last month. Apple said it has provided the FBI with all data in its possession.
Apple has taken a hard line on refusing to create a backdoor into iOS that would allow the FBI to unlock password-protected iPhones to assist in their investigations, but it does provide data backed up to iCloud to authorities when lawfully requested, as outlined in its semiannual Transparency Reports.
Hundreds of law enforcement agencies across the US have started using a new facial recognition system from Clearview AI, a new investigation by The New York Times has revealed. The database is made up of billions of images scraped from millions of sites including Facebook, YouTube, and Venmo. The Times says that Clearview AI’s work could “end privacy as we know it,” and the piece is well worth a read in its entirety.
The use of facial recognition systems by police is already a growing concern, but the scale of Clearview AI’s database, not to mention the methods it used to assemble it, is particularly troubling. The Clearview system is built upon a database of over three billion images scraped from the internet, a process which may have violated websites’ terms of service. Law enforcement agencies can upload photos of any persons of interest from their cases, and the system returns matching pictures from the internet, along with links to where these images are hosted, such as social media profiles.
The NYT says the system has already helped police solve crimes including shoplifting, identity theft, credit card fraud, murder, and child sexual exploitation. In one instance, Indiana State Police were able to solve a case within 20 minutes by using the app.
The use of facial recognition algorithms by police carries risks. False positives can incriminate the wrong people, and privacy advocates fear their use could help to create a police surveillance state. Police departments have reportedly used doctored images that could lead to wrongful arrests, and a federal study has uncovered “empirical evidence” of bias in facial recognition systems.
Using the system involves uploading photos to Clearview AI’s servers, and it’s unclear how secure these are. Although Clearview AI says its customer-support employees will not look at the photos that are uploaded, it appeared to be aware that Kashmir Hill, the Times journalist who wrote the piece, was having police search for her face as part of her reporting:
While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.
The Times reports that the system appears to have gone viral with police departments, with over 600 already signed up. Although there’s been no independent verification of its accuracy, Hill says the system was able to identify photos of her even when she covered the lower half of her face, and that it managed to find photographs of her that she’d never seen before.
One expert quoted by The Times said that the amount of money involved in these systems means they need to be banned before their abuse becomes more widespread. “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” said Woodrow Hartzog, a professor of law and computer science at Northeastern University. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”
Frontier Communications "is asking creditors to help craft a turnaround deal that includes filing for bankruptcy by the middle of March, according to people with knowledge of the matter," Bloomberg wrote.
Frontier CEO Bernie Han and other company executives "met with creditors and advisers Thursday and told them the company wants to negotiate a pre-packaged agreement before $356 million of debt payments come due March 15," the report said. The move would likely involve Chapter 11 bankruptcy to let Frontier "keep operating without interruption of telephone and broadband service to its customers."