Mark Zuckerberg testifies in social media addiction trial that Meta just wants Instagram to be ‘useful’

Mark Zuckerberg took the stand Wednesday in a high-profile jury trial over social media addiction. In an appearance that NBC News described as “combative,” the Facebook founder reportedly said that Meta’s goal was to make Instagram “useful,” not to increase the time users spend in the app.

On the stand, Zuckerberg was questioned about a company document that said improving engagement was among “company goals,” according to CNBC. But Zuckerberg claimed that the company had “made the conscious decision to move away from those goals, focusing instead on utility,” according to The Associated Press. “If something is valuable, people will use it more because it’s useful to them,” he said. 

The trial stems from a lawsuit brought by a California woman identified as “KGM” in court documents. The now 20-year-old alleges that she was harmed as a child by addictive features in Instagram, YouTube, Snapchat and TikTok. TikTok and Snap opted to settle before the case went to trial. 

Zuckerberg was also asked about previous public statements, including his remarks on Joe Rogan’s podcast last year that he can’t be fired by Meta’s board because he controls a majority of the voting power. According to The New York Times, Zuckerberg accused the plaintiffs’ lawyer of “mischaracterizing” his past comments more than a dozen times.  

Zuckerberg’s appearance in court also apparently prompted the judge to warn people in the courtroom not to record the proceedings using AI glasses. As CNBC notes, members of Zuckerberg’s entourage were spotted wearing Meta’s smart glasses as the CEO was escorted into the courthouse. It’s unclear if anyone was actually using the glasses in court, but legal affairs journalist Meghann Cuniff reported that the judge was particularly concerned about the possibility of jurors being recorded or subjected to facial recognition. (Meta’s smart glasses do not currently have native facial recognition abilities, but recent reports suggest the company is considering adding such features.)

The Los Angeles trial has been closely watched not just because it marked a rare in-court appearance for Zuckerberg. It’s among the first of several cases where Meta will face allegations that its platforms have harmed children. In this case and in a separate proceeding in New Mexico, Meta’s lawyers have cast doubt on the idea that social media should be considered a real addiction. Instagram chief Adam Mosseri previously testified in the same Los Angeles trial that Instagram isn’t “clinically addictive.”

This article originally appeared on Engadget at https://www.engadget.com/social-media/mark-zuckerberg-testifies-in-social-media-addiction-trial-that-meta-just-wants-instagram-to-be-useful-234332316.html?src=rss

visionOS 26.4 Brings PC VR Foveated Streaming To Apple Vision Pro

visionOS 26.4 will bring foveated streaming to Apple Vision Pro, enabling higher-quality wireless VR remote rendering from a local or cloud PC.

Before you continue reading, note that foveated streaming is not the same as foveated rendering, though the two techniques can be used alongside each other. As the names suggest, foveated rendering means the host device renders the area of each frame you’re currently looking at in higher resolution, while foveated streaming means sending that area to the headset in higher resolution.

It’s a term you may have heard in the context of Valve’s Steam Frame, where it’s a fundamental always-on feature of its PC VR streaming offering, delivered via the USB PC wireless adapter.

Given that the video decoders in headsets have a limited maximum resolution and bitrate, foveated streaming lets more of that fixed budget be spent on the region of the frame you’re actually looking at, rather than spreading it evenly across the whole frame.


Valve’s depiction of foveated streaming.

Unlike the macOS Spatial Rendering introduced in the main visionOS 26 release last year, which is a relatively high-level system that only supports a local Mac as a host, Apple’s developer documentation describes the new Foveated Streaming as a low-level host-agnostic framework.

The documentation highlights Nvidia’s CloudXR SDK as an example host, while noting that it should also work with local PCs. Apple even has a Windows OpenXR sample available on GitHub, which to our knowledge is the first and only time the company has even mentioned the industry-standard XR API, never mind actually using it.


The lead developer of the visionOS port of the PC VR streaming app ALVR, Max Thomas, tells UploadVR that he’s currently looking into adding support for foveated streaming, but that it will likely be “a lot of work”.

Because of how the feature works, Apple’s foveated streaming might even enable foveated rendering for tools like ALVR.

Normally, visionOS does not provide developers with any information about where the user is looking – Apple says this is to preserve privacy. Instead, developers only receive events, such as which element the user was looking at when they performed the pinch gesture. Crucially for foveated streaming, though, the API does tell the developer the “rough” region of the frame the user is looking at.

This should allow the host to render at higher resolution in this region too, not just stream it in higher resolution. As always, this will require the specific VR game to support foveated rendering, or to support tools that inject foveated rendering.
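To make the budget arithmetic concrete, here’s a minimal sketch of the core idea behind foveated streaming (hypothetical numbers and function names, not Apple’s or Valve’s actual API): given a fixed number of encoded pixels the headset’s decoder can handle per frame, concentrate sampling density in the gaze region at the expense of the periphery.

```python
# Hypothetical illustration of foveated streaming's pixel budget trade-off.
# Not a real API: allocate a fixed per-frame encode budget between a
# "foveal" (gaze) region and the periphery.

def allocate_pixels(total_budget, frame_w, frame_h,
                    foveal_frac=0.2, foveal_weight=4.0):
    """Return (foveal, peripheral) sampling densities in encoded
    pixels per source pixel.

    total_budget:  encoded pixels the decoder can handle per frame
    foveal_frac:   fraction of the frame covered by the gaze region
    foveal_weight: how much denser the gaze region is sampled
    """
    frame_area = frame_w * frame_h
    foveal_area = foveal_frac * frame_area
    peripheral_area = frame_area - foveal_area
    # Solve: foveal_area * (w * d) + peripheral_area * d = total_budget
    d = total_budget / (foveal_area * foveal_weight + peripheral_area)
    return foveal_weight * d, d

# Example: a 9 MP source frame squeezed into a 4 MP decode budget.
fov_d, per_d = allocate_pixels(total_budget=4_000_000,
                               frame_w=3000, frame_h=3000)
```

With these assumed numbers, uniform streaming would give about 0.44 encoded pixels per source pixel everywhere; foveating the stream instead yields roughly 1.11 in the gaze region and 0.28 in the periphery, which is why the area you’re looking at stays sharp.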


Clip from Apple’s visionOS foveated streaming sample app.

Interestingly, Apple’s documentation also states that visionOS supports displaying both rendered-on-device and remote content simultaneously. The company gives the example of rendering the interior of a car or aircraft on the headset while streaming the highly detailed external world on a powerful cloud PC, which would be preferable from a perceived latency and stability perspective to rendering everything in the cloud.

We’ll keep an eye on the visionOS developer community in the coming months, especially the enterprise space, for any interesting uses of Apple’s foveated streaming framework in practice.

Mark Zuckerberg Testifies During Landmark Trial On Social Media Addiction

Mark Zuckerberg is testifying in a landmark Los Angeles trial examining whether Meta and other social media firms can be held liable for designing platforms that allegedly addict and harm children. NBC News reports: It’s the first of a consolidated group of cases — from more than 1,600 plaintiffs, including over 350 families and over 250 school districts — scheduled to be argued before a jury in Los Angeles County Superior Court. Plaintiffs accuse the owners of Instagram, YouTube, TikTok and Snap of knowingly designing addictive products harmful to young users’ mental health. Historically, social media platforms have been largely shielded by Section 230, a provision added to the Communications Act of 1934, that says internet companies are not liable for content users post. TikTok and Snap reached settlements with the first plaintiff, a 20-year-old woman identified in court as K.G.M., ahead of the trial. The companies remain defendants in a series of similar lawsuits expected to go to trial this year.

[…] Matt Bergman, founding attorney of Social Media Victims Law Center — which is representing about 750 plaintiffs in the California proceeding and about 500 in the federal proceeding — called Wednesday’s testimony “more than a legal milestone — it is a moment that families across this country have been waiting for.” “For the first time, a Meta CEO will have to sit before a jury, under oath, and explain why the company released a product its own safety teams warned were addictive and harmful to children,” Bergman said in a statement Tuesday, adding that the moment “carries profound weight” for parents “who have spent years fighting to be heard.” “They deserve the truth about what company executives knew,” he said. “And they deserve accountability from the people who chose growth and engagement over the safety of their children.”


Read more of this story at Slashdot.

Dyson announces the PencilWash wet floor cleaner

Last year Dyson introduced the PencilVac, which it immediately declared the “world’s slimmest vacuum cleaner.” Presumably, then, the title of world’s slimmest wet floor cleaner goes to the newly unveiled PencilWash.

Promising a “lighter, slimmer and smaller solution to wet cleaning without compromising on hygiene,” the PencilWash is designed to let you clean everywhere you need to with minimal hassle. Like the vacuum cleaner with which it shares the first part of its name, the handle measures just 1.5 inches in diameter from top to bottom, and the whole thing weighs little more than 2kg.

The ultra-thin design allows the cleaner to lie almost completely flat, allowing you to get into tight corners or under low furniture, where more traditionally bulky devices might struggle. Its slender proportions also make it easier to store if your home is on the smaller side.

Dyson says the PencilWash only applies fresh water to floors, and after it swiftly eliminates spills and stains, floors should dry pretty quickly. Its high-density microfiber roller is designed to tackle both wet and dry debris in one pass, and because it doesn’t have a traditional filter, you won’t have to worry about trapped dirt or lingering smells.

Above the power buttons there’s a screen displaying remaining battery level, and the handle can be slotted into a charging dock when not in use.

The Dyson PencilWash will cost $349, with a release date yet to be announced.

This article originally appeared on Engadget at https://www.engadget.com/home/dyson-announces-the-pencilwash-wet-floor-cleaner-230152299.html?src=rss

More ISA Differences Come To Light With The New AMD GFX1170 “RDNA 4m”

Earlier this month we spotted the addition of a new GFX1170 GPU target in the AMDGPU LLVM back-end. What makes this GFX1170 target interesting is that it’s marked as an APU/SoC part with “RDNA 4m” while being part of the GFX11 series. The GFX11 series is for RDNA 3, GFX115x is for RDNA 3.5, and GFX12 is RDNA 4. More ISA changes have now been committed to the AMDGPU LLVM back-end that bring a few more instructions better in line with RDNA 4…

Google’s AI Music Maker Is Coming To the Gemini App

Google is bringing its Lyria 3 AI music model into the Gemini app, allowing users to generate 30-second songs from text, images, or video prompts directly within the chatbot. The Verge reports: Lyria 3’s text-to-music capabilities allow Gemini app users to make songs by describing specific genres, moods, or memories, such as asking for an “Afrobeat track for my mother about the great times we had growing up.” The music generator can make instrumental audio and songs with lyrics composed automatically based on user prompts. Users can also upload photographs and video references, which Gemini then uses to generate a track with lyrics that fit the vibe.

“The goal of these tracks isn’t to create a musical masterpiece, but rather to give you a fun, unique way to express yourself,” Google said in its announcement blog. Gemini will add custom cover art generated by Nano Banana to songs created on the app, which aims to make them easier to share and download. Google is also bringing Lyria 3 to YouTube’s Dream Track tool, which allows creators to make custom AI soundtracks for Shorts.

Dream Track and Lyria were initially demonstrated with the ability to mimic the style and voice of famous performers. Google says it’s been “very mindful” of copyright in the development of Lyria 3 and that the tool “is designed for original expression, not for mimicking existing artists.” When prompted for a specific artist, Gemini will make a track that “shares a similar style or mood” and uses filters to check outputs against existing content.


Read more of this story at Slashdot.

Deadly Delivery Adds Mystery Room In Latest Update

Co-op parcel delivery horror game Deadly Delivery adds a new ‘Mystery Room’, door microphone, and other new mechanics.

We previously reviewed Flat Head Studio’s Deadly Delivery, finding it to be a “clever, effective, and genuinely funny VR co-op that nails the feel of physical play in a spooky, comic world.” Flat Head has already updated the game with new content several times since its December launch, adding a new Ice Caves location and several quality of life features.


The Mystery Room adds a new room to the Bloodmoon and Ice Cave levels with more doors for players to explore. Some doors now have a microphone where players have to declare themselves before proceeding with the drop-off. A new item called the Door Reuser is available to purchase from the in-game shop as well, allowing players to deliver an extra package to a door.


The update also includes general bug fixes, an ammo increase for the Roulette Gun, and wider passages in certain areas so multiple players can move around more easily.

Deadly Delivery is available on Meta Quest and Steam for $9.99.

You Can Preorder the Google Pixel 10a for $500 and Get a $100 Amazon Gift Card

We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.

Google is releasing its budget a-series version of the Pixel 10 on March 5, a whole month earlier than the 9a arrived in 2025. There isn’t much dividing the 10a from the 9a, but there are a few software updates that can make it worth it for some people. Throw in a $100 Amazon gift card, and it’s hard to say no. Pre-orders for the Pixel 10a are already open at $499, plus the gift card. Alternatively, you can get the Pixel Buds 2a instead of the $100 gift card for the same $499 price.

Lifehacker’s Associate Tech Editor Michelle Ehrhardt actually got her hands on the Pixel 10a during a recent Google demo event. As she pointed out, the specs on the Pixel 10a are not like the Pixel 10. It’s more similar to the 9 instead. It has a Tensor G4 processor, 8GB of RAM, and up to 256GB of storage, as well as the same camera system, with a 48MP main lens, a 13MP ultrawide lens, and a 13MP selfie camera. The battery life is the same 30+ hours, too, and the MagSafe-like Pixelsnap feature is gone. The main upgrade here is a brighter 3,000 nits screen, a thinner bezel, and an improved Corning Gorilla Glass 7i cover glass. But the value might be in the software and AI.

There are two AI camera features that debuted with the Pixel 10. One is Auto Best Take, which takes 150 frames in one click, chooses the best picture, and automatically deletes the rest (or stitches together elements from multiple shots to make a new “best” image). The other is Camera Coach, which uses AI to guide you toward taking the best picture. Google also brought Satellite SOS to an a-series phone for the first time. It lets you connect to a satellite and ping emergency services for help when you have no cell signal.

If you’re already on a Pixel 9a or newer, there’s not much here to make an upgrade worth it. However, if you have anything older than a Pixel 9 or are switching to Pixel for the first time, this is a great phone and a great opportunity to do so.

Apple Is Adding ChatGPT, Claude, and Gemini to CarPlay in iOS 26.4

When Apple released the first beta for iOS 26.4 this week, testers immediately got to work looking for each and every new feature and change. To their credit, there’s more new here than in iOS 26.3, including an AI playlist generator for Apple Music and support for end-to-end encryption with RCS (finally). But one update slipped under the radar, since it’s not actually available to test in this first beta: CarPlay support for AI assistants like ChatGPT, Claude, and Gemini.

AI assistants are coming to CarPlay in iOS 26.4

As spotted by MacRumors, CarPlay’s Developer Guide spills the beans on this upcoming integration. On page 13, the entitlement “CarPlay voice-based conversational app” is listed with a minimum iOS version of iOS 26.4. While it doesn’t specifically mention integrations with ChatGPT, Claude, and Gemini, the documentation does suggest that voice-based conversational apps are a supported app type in iOS 26.4. As such, MacRumors is reporting that companies that make chatbots (i.e. OpenAI, Anthropic, and Google) will need to update their apps to work with CarPlay.

According to MacRumors, drivers will be able to ask apps like ChatGPT, Claude, and Gemini questions while on the road, but they won’t be able to control functions of the car or the driver’s iPhone. You also won’t be able to use a “wake word” to activate the assistant (e.g. “Hey ChatGPT,” or “OK, Gemini”), so you’ll need to tap on the app itself to talk to the assistant.

Apple is issuing guidance to developers on how to implement these assistants in CarPlay starting with this latest update. On page seven, Apple notes that voice-based conversational apps must only work when voice features are actively being used, and avoid showing text or imagery when responding to queries. It’s the first time Apple is allowing developers of “voice-based conversational” apps to develop for CarPlay. While the company has allowed other developers to make apps for its in-car experience, it has obviously put limitations on what types of apps can get through. It makes sense for Google to develop a Google Maps CarPlay app, but TikTok has no business offering drivers a CarPlay version of its algorithm.

Install the iOS 26.4 beta at your own risk

This addition is coming to iOS 26.4, but likely in a future beta. Don’t install the beta at this time expecting to try this feature out—though, you should think twice before installing the beta at all. Betas like iOS 26.4 are temperamental, as Apple is currently testing the software for bugs and stability issues. By installing it early, you risk dealing with those issues, which could impact how you use your iPhone, or even result in data loss.

My Favorite ‘Glances’ From Garmin’s Latest Software Update

We may earn a commission from links on this page.

A new software update for recent Garmin watches adds “glances” for types of data that weren’t previously viewable from the watch face. These include battery usage stats, lifestyle logging, sleep alignment, and even a few extras like sports scores. The update also includes features other than glances, including, notably, fitness coaching. The features I’m writing about are now available in the Forerunner 570, Forerunner 970, and Vivoactive 6, as well as the Fenix 8 series, Venu 4, and X1. (The watch in my photos here is a Forerunner 570.) 

New glances for Garmin watches

Battery and lifestyle logging glances

Credit: Beth Skwarecki

Glances are those little strips of data that you can see when you scroll up (or down) from the main watch face—things like the weather and your training status, for example. Garmin’s newest stable update for the Forerunner 570 and 970 is numbered 16.28, and has been rolling out slowly over the past week. (I just got it on my 570.) Here are my favorite new glances: 

  • Battery usage: The new battery widget has the charge level, of course, but tapping it shows you a graph of your battery life, including its long-term history, and a list of which apps or functions have been using the most battery. 

  • Lifestyle logging: If you’ve been using the new Lifestyle Logging feature in the Garmin Connect app, you can now do it from your wrist. This is where you select a few health-related habits to track every day. The glance will tell you how many items you’ve logged today, and you can quickly answer these yes or no questions without opening your phone. 

  • Sleep alignment: The sleep glance has been around a while, but it now includes more data, including your optimal sleep window and whether you’re aligned with it. 

  • Sports scores: You can select a handful of your favorite teams to receive up-to-date scores on your wrist. This was surprisingly easy to set up without even unlocking my phone: You just choose a league (MLB, NFL, NCAA men’s or women’s, to name a few) and then scroll until you find the city name for your favorite team. Alphabetizing by city rather than team name is smart, especially if you’re going through the menus picking all your hometown teams. This widget seems to show the next upcoming game if your chosen team hasn’t played a recent game.

  • Weight tracking: From this glance you can see your current body weight, and tap through to see trends and history.

Other new features Garmin added in its latest update

Screenshot and watch photo of my upcoming Fitness Coach workouts

Credit: Beth Skwarecki

Besides the glances, Garmin added a bunch of other features to its watches. According to the release notes, Garmin says that pace readings (presumably during runs) have been improved to be more responsive. I look forward to trying that out. There are also color filters—instead of just turning on a red shift for nighttime, you can choose other colors as well. 

Training plans also get more capabilities. In addition to the usual running and cycling plans, there’s now a fitness coach, similar to what launched on the Venu 4. This lets you set up a plan that gives you a mix of cardio and strength workouts, though I found that the coach’s endurance workouts don’t actually specify the activity. Typically, when selecting my “basic endurance” workout that’s planned for today, the watch prompts me to select from my usual list of endurance activities: run, trail run, treadmill run, indoor bike, and so on. The coach, however, will simply tell you to do “endurance” or “cardio” for 20 minutes. You could choose to run, but you could also hike or use the elliptical, for instance.

For the strength workouts, I told the coach that I have access to a full gym, so my next strength workout will include deadlifts and squats. It looks like a pretty solid plan for somebody who wants well-rounded fitness without committing to running a race. 

GameHub Will Give Mac Owners Another Imperfect Way To Play Windows Games

An anonymous reader quotes a report from Ars Technica: For a while now, Mac owners have been able to use tools like CrossOver and Game Porting Toolkit to get many Windows games running on their operating system of choice. Now, GameSir plans to add its own potential solution to the mix, announcing that a version of its existing Windows emulation tool for Android will be coming to macOS. Hong Kong-based GameSir has primarily made a name for itself as a manufacturer of gaming peripherals — the company’s social media profile includes a self-description as “the Anti-Stick Drift Experts.” Early last year, though, GameSir rolled out the Android GameHub app, which includes a GameFusion emulator that the company claims “provides complete support for Windows games to run on Android through high-precision compatibility design.”

In practice, GameHub and GameFusion for Android haven’t quite lived up to that promise. Testers on Reddit and sites like EmuReady report hit-or-miss compatibility for popular Steam titles on various Android-based handhelds. At least one Reddit user suggests that “any Unity, Godot, or Game Maker game tends to just work” through the app, while another reports “terrible compatibility” across a wide range of games. With Sunday’s announcement, GameSir promises a similar opportunity to “unlock your entire Steam library” and “run Win games/Steam natively” on Mac will be “coming soon.” GameSir is also promising “proprietary AI frame interpolation” for the Mac, following the recent rollout of a “native rendering mode” that improved frame rates on the Android version. There are some “reasons to worry” though, based on the company’s uneven track record. The Android version faced controversy for including invasive tracking components, which were later removed after criticism. There were also questions about the use of open-source code, as GameSir acknowledged referencing and using UI components from Winlator, even while maintaining that its core compatibility layer was developed in-house.


Read more of this story at Slashdot.

Sea Otters Holding Hands While They Sleep

Sea otters hold hands while they sleep floating on their backs, an incredibly precious behavior known as rafting, so they don’t get separated while they’re napping. The behavior is particularly useful to young sea otters, who haven’t quite developed their sea legs yet and need to stay close to mom. That’s smart. You know, sometimes my girlfriend and I hold hands when we sleep, but I still wake up alone. When do you think she’s coming back? “She left months ago.” So it’s gotta be soon, right?

Google I/O 2026: How to Watch and What We Know so Far

Google I/O 2026 is nearly upon us. This is Google’s annual opportunity to showcase the software features (and perhaps some of the hardware) the company has been cooking up behind the scenes. Like other big tech keynotes, anyone can tune in live and catch Google’s latest announcements as they happen. Here’s when Google I/O 2026 will kick off, and what we know about the conference at this time.

When and what time is Google I/O 2026?

Google tends to kick off its I/O event in May of each year, and 2026 is no different. This year, Google I/O will run May 19 through May 20. If you’re used to watching one single livestream, that two-day schedule might come as a surprise. But I/O isn’t just an announcement: It’s a developer conference, spanning keynotes, demos, and product sessions.

But if you’re only interested in the company’s main keynote, you’ll want to get May 19 on your calendar. Google hasn’t announced the exact time for its presentation, but it usually starts at 10 a.m. PT (1 p.m. ET), based on previous years.

How to watch Google I/O 2026

While Google invites a select group of journalists to watch its presentations live, and encourages developers to register to attend its various events, you can tune into the livestream wherever you are in the world. Google hasn’t confirmed where its livestreams will be hosted this year, but looking to the past, you’ll likely be able to stream the keynote from the official I/O website, as well as Google’s official YouTube channel.

What will be announced at Google I/O 2026?

The short answer? We don’t really know! Google is keeping I/O news close to the vest, and rumors haven’t been particularly prolific this year—at least, not yet. Seeing as it’s only February, it’s entirely possible we’ll hear more about Google I/O 2026 as we get closer to May.

That said, there are some things you can expect to see regardless of leaks and rumors. Android 17 will almost assuredly take center stage at Google I/O this year. Google just released the first beta for the OS on Wednesday, though it doesn’t change all that much about Android 16 at this time. Still, I suspect beta testers will discover a number of new features and changes between now and May, as Google continues to add new things to its test software ahead of I/O.

Like the past couple of I/Os, this year should also be all about AI. Google seems to come out with new AI announcements multiple times a week, including adding its Lyria 3 AI music model to Gemini, or adding an agentic bot to Chrome to browse the internet for you. I expect Google I/O 2026 to be full of AI features—perhaps more than some of us would like to hear about.

I/O 2026 could also show off some hardware, but that’s no guarantee. Google did just announce the Pixel 10a, the company’s latest “budget” phone, and it could reveal other devices in May, but I/O really is more about the software than the hardware. (It is a developer conference, after all.)

Texas Sues TP-Link Over China Links and Security Vulnerabilities

TP-Link is facing legal action from the state of Texas for allegedly misleading consumers with “Made in Vietnam” claims despite China-dominated manufacturing and supply chains, and for marketing its devices as secure despite reported firmware vulnerabilities exploited by Chinese state-sponsored actors. The Register: The Lone Star State’s Attorney General, Ken Paxton, is filing the lawsuit against California-based TP-Link Systems Inc., which was originally founded in China, accusing it of deceptively marketing its networking devices and alleging that its security practices and China-based affiliations allowed Chinese state-sponsored actors to access devices in the homes of American consumers.

It is understood that this is just the first of several lawsuits that the Office of the Attorney General intends to file this week against “China-aligned companies,” as part of a coordinated effort to hold China accountable under Texas law. The lawsuit claims that TP-Link is the dominant player in the US networking and smart home market, controlling 65 percent of the American market for network devices.

It also alleges that TP-Link represents to American consumers that the devices it markets and sells within the US are manufactured in Vietnam, and that consistent with this, the devices it sells in the American market carry a “Made in Vietnam” sticker.


Read more of this story at Slashdot.

Linux 7.0 Showing Some Early Performance Regressions On Intel Panther Lake

With the Linux 7.0 merge window beginning to calm down ahead of the 7.0-rc1 release due out on Sunday, one of the areas I was most excited about benchmarking on Linux 7.0 was looking for any performance gains with the new Intel Core Ultra Series 3 “Panther Lake” given ongoing Intel Xe graphics driver improvements and other general kernel optimizations. Unfortunately, by and large the Intel Panther Lake performance is moving in the wrong direction in the early Linux 7.0 benchmarking.

Meta Is Planning to Bring Back Facial Recognition

According to a New York Times report, Meta plans to add facial recognition technology to its Ray-Ban and Oakley smart glasses. The feature, called “Name Tag” within Meta, would allow users to identify people and get information about them through Meta’s AI. The feature could be rolling out as early as this year.

Adding the feature is not a done deal, however. According to an internal document cited by The Times, the company is weighing the “safety and privacy risks” of introducing facial recognition as well as discussing how to navigate the response to a no-doubt controversial feature.

A document quoted by The Times suggests Meta is deliberately timing a potential rollout to minimize scrutiny. “We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” the document from Meta’s Reality Labs reads.

Meta’s long history with facial recognition

This would not be the first time Meta dabbled in facial recognition. Meta debated adding facial recognition to the first generation of its Ray-Ban smart glasses in 2021, but decided against it due to privacy concerns. And Facebook, Meta’s social media platform, identified and tagged people as early as 2010, but the company pulled the feature in 2021, citing “many concerns about the place of facial recognition technology in society.”

Concerns also include the risk of doxxing. The ACLU characterized facial recognition used by law enforcement as a “systematic invasion of privacy,” though personal use of the technology raises different issues. Facial recognition glasses could enable instant doxxing, linking anyone’s face to publicly available information, including social media profiles, addresses, and phone numbers.

Meta says it isn’t planning to release a universal facial recognition tool. The company is considering glasses that identify only people a user knows based on their connection on a Meta platform, or only identify people who have a public account on a Meta site like Instagram. “While we frequently hear about the interest in this type of feature—and some products already exist in the market—we’re still thinking through options and will take a thoughtful approach if and before we roll anything out,” Meta said in a statement.

The upside of smart glasses with facial recognition technology

Privacy concerns aside, the technology has genuinely beneficial applications, particularly to people with vision problems. According to The Times’ report, Meta originally planned to introduce Name Tag to attendees of a conference for the blind before releasing it to the public, highlighting a group that could potentially benefit from facial recognition technology, though that plan was scrapped for unknown reasons.

Mike Buckley, CEO of Be My Eyes, an accessibility technology company that works closely with Meta, said he has been in discussions with Meta about facial recognition glasses for more than a year. “It is so important and powerful for this group of humans,” Buckley told The Times.

Einstein Probe’s Violent X-Ray Flash Points To A Black Hole Devouring A Dead Star

Scientists at the University of Hong Kong are convinced that the Chinese Einstein Probe space telescope has detected an intermediate-mass black hole devouring a white dwarf and expelling a relativistic jet, based on X-ray signals ahead of a series of intense flares also detected by NASA’s Fermi Gamma-ray space telescope.