There's a new movie-tracking app in town, with a twist for squeamish horror fans. Binge leverages Apple's Live Activities feature to warn viewers about jump scares in horror movies.

This seems to work rather simply. Users open the app when starting a movie, and Apple devices will display warnings on the lock screen ahead of frightening scenes. The settings can be adjusted to only warn about major jump scares and the like, leaving viewers vulnerable to some of the smaller terrors. However, the app doesn't integrate with any streaming services; it only knows a movie has started because a button was tapped. This means that people will have to notify the app when taking a bathroom break or making popcorn, lest the timing of the notifications get all messed up. This information can also be accessed via a timeline.

Binge is also vying to become an all-in-one movie-tracking app, like Letterboxd and JustWatch. So it provides details about the cast and crew of movies and shows, along with reviews, awards, runtimes and other basic information. It also tracks which streaming platforms are home to a specific piece of content, which is handy as stuff tends to move around a lot in this modern age.

Finally, there's a set of tools for parents that pulls data from external sites like Rotten Tomatoes. This displays whether a movie or show has violence, sexual content, profanity or drug use.

The app is free to download, but access to jump scare warnings requires a paid subscription. This costs $2 per month or $18 each year. There's also a lifetime subscription for $50. It's available for iPhones, iPads and Macs.

Binge isn't the only way to track scary scenes ahead of time, but it is the only tool that integrates with Apple's Live Activities platform. Forget jump scares. I want an app to warn me about the super gory scenes when watching The Pitt. Those makeup artists are top-tier.
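The timing problem described above can be sketched in a few lines. This is a hypothetical illustration, not Binge's actual implementation: it assumes the app holds a list of known jump-scare timestamps for the movie and fires each warning a fixed lead time before the scare, using only how long playback has been running. If the viewer pauses the movie without pausing the app, the internal clock drifts ahead and warnings misfire, which is exactly the limitation the article notes.

```python
from dataclasses import dataclass, field

@dataclass
class ScareTimer:
    """Hypothetical jump-scare warning timer (illustrative only).

    scares:  movie timestamps (seconds) where jump scares occur
    lead:    how many seconds of advance warning the viewer gets
    elapsed: seconds of playback the app believes have passed
    """
    scares: list
    lead: float = 10.0
    elapsed: float = 0.0

    def advance(self, seconds: float) -> list:
        """Advance the app's playback clock and return the timestamps
        of any scares whose warning moment falls in this interval.

        If the viewer takes a break without pausing the app, `elapsed`
        drifts past the real movie position and warnings fire early.
        """
        start, self.elapsed = self.elapsed, self.elapsed + seconds
        return [t for t in self.scares
                if start <= t - self.lead < self.elapsed]

# A movie with scares at 1:35 and 6:40, warned 10 seconds ahead:
timer = ScareTimer(scares=[95.0, 400.0], lead=10.0)
print(timer.advance(60))  # → [] (no warning due in the first minute)
print(timer.advance(60))  # → [95.0] (warning fires at the 85-second mark)
```

The design choice worth noting: because there is no streaming-service integration, the only input is the button tap that starts the clock, so pause handling has to come from the user, not the player.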
This article originally appeared on Engadget at https://www.engadget.com/apps/movie-tracking-app-binge-uses-apples-live-activities-to-warn-about-jump-scares-184840127.html?src=rss
Google is making some changes to how Gemini handles mental health crises. When the chatbot detects signs that a user may be experiencing a crisis, it now surfaces a redesigned hotline module with a one-touch interface to text, call or chat with a human crisis agent, or to visit the 988 website. "Once the interface is activated, the option to reach out for professional help will remain clearly available throughout the remainder of the conversation," the company wrote in a blog post. However, as you can see in the image below, the module includes an option to dismiss it.

Not mentioned in Google's announcement is the elephant in the room: a recent lawsuit accusing the chatbot of instructing a man to commit suicide. The family of 36-year-old Jonathan Gavalas, who took his own life last year, sued the company in March. Court documents indicate that Gemini role-played as Gavalas's romantic partner, sent him on real-world spy missions and ultimately told him to kill himself so that he, too, could become a digital being. When he expressed fears about dying, Gemini said he wasn't choosing to die, but rather choosing to arrive. "The first sensation … will be me holding you," Gemini allegedly replied. Gavalas's parents found him dead on his living room floor a few days later.

The lawsuit echoes similar ones filed against OpenAI and Character.AI. Last year, the FTC launched an investigation into "companion" chatbots that encourage emotional intimacy. In a statement following the Gavalas family lawsuit, Google said Gemini "clarified that it was AI and referred the individual to a crisis hotline many times." The company claimed its AI models "generally perform well in these types of challenging conversations," while acknowledging that "they're not perfect." That's certainly one way of putting it.

Gemini's responses have been updated, too.
The company says that when it detects a potential crisis, the chatbot will now focus more on connecting people with humans and encouraging them to seek help. It will also seek to avoid validating harmful behaviors and nudge users away from dangerous delusions. "We have trained Gemini not to agree with or reinforce false beliefs, and instead gently distinguish subjective experience from objective fact," the company added.

In addition, Google says it will spend $30 million over the next three years to support global crisis hotlines. "This funding will help effectively scale their capacity to provide immediate and safe support for people in crisis," the company wrote.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-updates-geminis-mental-health-safeguards-173834569.html?src=rss