Google Assistant Broadcast seemingly now requires two steps to work


Google Assistant’s “Broadcast” feature now seemingly requires users to complete two steps before their message is delivered, making the once-nifty feature more cumbersome. The added step may be a bug or a technical necessity, but it does not appear to be a deliberate security measure.

Has Google added an extra step for Broadcast?

Google Assistant runs on multiple devices, such as smart speakers, smart TVs, and even smartwatches. These devices, often scattered throughout a smart home, can be turned into a public address (PA) system.

The Broadcast feature essentially turns these connected smart devices into an impromptu intercom system. To use it, users merely had to say, “Hey Google, Broadcast,” followed by the message.

Simply put, just a single instruction, followed by the intended message, was enough. Google Assistant would immediately obey with an “Alright, Broadcasting now”. However, it appears users need to add an extra step before Google Assistant delivers the message to connected devices.

Why must users wait for Google Assistant’s confirmation?

Google Assistant users reportedly have to pause after saying “Hey Google, Broadcast”. The assistant then responds with, “What’s the message?” Users can then dictate the message to be sent out to connected devices.

Although not a huge inconvenience, the new method does add a few extra seconds to the process, which was once nearly instantaneous. Interestingly, Google hasn’t added a pause to boost security or prevent users from accidentally broadcasting undesired messages. In other words, Google might not have intentionally split up the process.

Several users have observed that Google Assistant has trouble understanding vocal instructions. There have been complaints about the AI assistant not grasping most “broadcast” commands that are immediately followed by the message.

There have been several instances wherein Google Assistant has initiated a Google Search instead of broadcasting the message. Incidentally, the assistant is still delivering several common messages such as “dinner is ready”, without any hiccups.

Hence, it is likely that Google Assistant needs users to split the instruction so that it understands they intend to broadcast a message. That way, it knows any audio following the “Hey Google, Broadcast” keywords should be delivered, not searched.
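The split flow the article infers can be sketched as a toy intent parser. This is purely illustrative: Google's actual speech pipeline is far more complex, and nothing below is its real code.

```python
def handle_command(utterance: str):
    """Hypothetical sketch of one-shot vs. two-step command handling."""
    # Strip the wake word and normalize case.
    text = utterance.lower().removeprefix("hey google,").strip()
    if text.startswith("broadcast"):
        message = text.removeprefix("broadcast").strip(" ,")
        if message:
            # One-shot flow: the message arrived in the same utterance.
            return ("broadcast", message)
        # Two-step flow: no message yet, so prompt and wait for a follow-up.
        return ("prompt", "What's the message?")
    # Ambiguous audio falls through to a generic web search.
    return ("search", text)

print(handle_command("Hey Google, broadcast dinner is ready"))
print(handle_command("Hey Google, Broadcast"))
```

The two-step version trades a few seconds of latency for an unambiguous signal that everything heard next is a message to deliver, not a query to search.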

Google recently restricted the ability to call a device or broadcast a message to a Google Family Group: users can only broadcast messages to devices on the premises where the message originated.



Apple Vision Pro gets support for more 8BitDo controllers


The gaming possibilities on your Apple Vision Pro are getting a boost thanks to the support for more 8BitDo controllers. If you’ve been looking for better controller options to play with Apple’s AR headset, you’re in luck.

8BitDo is one of the most popular gaming accessory manufacturers. Its controllers not only get the job done but also come in nostalgic designs reminiscent of classic consoles. Now, a new set of 8BitDo controllers joins the list of peripherals compatible with the Apple Vision Pro. These accessories carry Apple's approval, so they work immediately after pairing via Bluetooth, with no further complications.

These are the 8BitDo controllers now compatible with Apple Vision Pro

The list of 8BitDo controllers compatible with Apple Vision Pro now includes the following models: Ultimate 2.4G, Pro 2, SN30 Pro, SN30 Pro for Android (Xbox licensed), SN30 Pro+, Lite 2, Lite SE, and N30 Pro 2. Additionally, the company made its Retro Mechanical Keyboard compatible. This means that the new list of supported accessories will enhance both gaming and productivity.

This way, the 8BitDo controller models mentioned above join the list of current MFi accessories already compatible with the Apple Vision Pro. As of today, you can play most iPad games on Apple’s headset. In addition, it supports Xbox Cloud Gaming and Nvidia GeForce Now. Therefore, there is a good catalog of both casual and AAA titles to enjoy.

It’s noteworthy that 8BitDo had already announced support for multiple accessories on Apple products back in 2023. However, that support focused on smartphones, tablets, and Mac computers. According to the Engadget report, it was made possible by both the iOS 16.3/iPadOS 16.3/tvOS 16.3/macOS 13.2 updates and new controller firmware.

It is already well known that the Apple Vision Pro is not the most comfortable AR headset in the world, so it may not be the best idea to use it for long gaming sessions. Even so, it is good to have more controller options available, especially with future generations in mind.

Source: 8BitDo


Vivo X100s Pro certified on Google Play Console with key specs


Recent reports suggest that the Vivo X100s and X100s Pro will be the next models in the series. As their names indicate, they will be improved versions of the X100 and X100 Pro that we already know. The arrival of a Vivo X100 Ultra is also expected later.

Previously, the Vivo X100s had been detected on certification platforms, and now it is the turn of the “Pro” model. As spotted by 91 Mobiles, the Vivo X100s Pro is already listed on the Google Play Console. The listing includes some key specifications, and even a render confirming its design.

Here’s what the Google Play Console reveals about the Vivo X100s Pro

The phones in the “Vivo X” series stand out, especially for their impressive cameras, premium design, and high performance. The “s” versions usually arrive a few months later, maintaining all the elements of the original model, but offering a more powerful chip. However, this time the approach may be slightly different.

The Vivo X100s Pro was certified on the Google Play Console with model number PD2324, powered by the MediaTek MT6989 SoC. The chip model refers to the Dimensity 9300, the same hardware present in the original X100 Pro. It’s noteworthy that the base Vivo X100s would also use the same chip.

That said, it is also possible the phone carries the Dimensity 9300 Plus chipset. The device previously appeared in Geekbench listings with more CPU details, showing a higher clock speed on its main core (3.4GHz) than the standard Dimensity 9300 (3.25GHz).

Other specs revealed by the listing include a Mali G720 GPU, Android 14, 16GB of RAM, and a 1260 x 2800 px screen resolution.

This is what the new device looks like

Design seems to be where the brand will make the fewest changes. The render shows a look quite similar (or identical) to that of the current X100 series: a prominent circular camera module on the rear with three sensors. The module also bears “ZEISS” branding, highlighting the nature of its optics and the collaboration with the storied camera company. The render shows the device in light blue, but it would arrive in at least three colors.

As for availability, there is still no official confirmation of the series' launch date. However, the report suggests it could be announced in India alongside the Vivo X Fold 4 and the Vivo Pad 3 this April.



Vulnerabilities Exposed Hugging Face to AI Supply Chain Attacks


Cybersecurity firm Wiz.io found that AI-as-a-service (aka AI Cloud) platforms like Hugging Face are vulnerable to critical risks, which allow threat actors to escalate privileges, gain cross-tenant access, and potentially take over continuous integration and continuous deployment (CI/CD) pipelines. 

Understanding The Problem

Running AI models requires powerful GPUs, so this compute is often outsourced to AI service providers, much like consuming cloud infrastructure from AWS, GCP, or Azure. Hugging Face’s offering is called the Hugging Face Inference API.

Wiz Research was able to compromise the service running custom models by uploading a malicious model of their own and using container-escape techniques, gaining cross-tenant access to other customers’ models stored on Hugging Face.

The platform supports various AI model formats, two prominent ones being PyTorch (Pickle) and Safetensors. Python’s Pickle format is known to be unsafe, allowing remote code execution upon deserialization of untrusted data, although Hugging Face scans Pickle files uploaded to its platform and flags those it deems dangerous.

However, the researchers cloned a legitimate Pickle-based model (gpt2), modified it to spawn a reverse shell upon loading, and uploaded the hand-crafted model as a private model. Interacting with it through the Inference API gave them a reverse shell inside the Inference API infrastructure, demonstrating how straightforward it is to craft a PyTorch model that executes arbitrary code.
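The underlying weakness is easy to demonstrate with a harmless snippet: any object can name a callable via `__reduce__`, and `pickle.loads` will invoke it during deserialization. This is a generic Python illustration, not the researchers' actual payload.

```python
import pickle

class EvilDemo:
    """An object that runs a callable of its choosing at load time."""
    def __reduce__(self):
        # A real attack would call os.system or spawn a reverse shell;
        # here we use a harmless eval so the effect is visible and safe.
        return (eval, ("21 * 2",))

payload = pickle.dumps(EvilDemo())
result = pickle.loads(payload)  # arbitrary code runs during deserialization
print(result)  # 42
```

This is why formats like Safetensors, which store only tensor data and no executable objects, are preferred for sharing model weights.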

Potential Risks

The potential impact is devastating, as attackers could access millions of private AI models and apps. Two critical risks stand out: shared inference infrastructure takeover, where untrusted malicious models run on shared inference infrastructure, and shared CI/CD takeover, where a malicious AI application takes over the CI/CD cluster and executes a supply chain attack.

Furthermore, adversaries can attack AI models, AI/ML applications, and inference infrastructure using various methods. They can craft inputs that cause incorrect predictions, or distribute malicious models outright. AI models are often treated as black boxes and dropped into applications, yet few tools exist to verify a model’s integrity, so developers must be cautious when downloading them.

“Using an untrusted AI model could introduce integrity and security risks to your application and is equivalent to including untrusted code within your application,” Wiz Research’s report explained.


 

Hugging Face and Wiz Research Join Hands

Open-source artificial intelligence (AI) hub Hugging Face and Wiz.io have collaborated to address the security risks associated with AI-powered services. The joint effort highlights the importance of proactive measures to ensure the responsible and secure development and deployment of AI technologies.

Commenting on this, Nick Rago, VP of Product Strategy at Salt Security, added: “Securing the critical cloud infrastructure that houses AI models is crucial, and the findings by Wiz are significant. It is also imperative that security teams recognize the vehicle in which AI is trained and serviced is an API, and rigorous security must be applied at that level to ensure the security of AI supply chains.”

A Concerning Scenario

This discovery comes at a time when concerns are already raised regarding data safety under AI-based tools. The AvePoint survey shows that less than half of organizations are confident they can use AI safely, with 71% concerned about data privacy and security before implementation, and 61% worried about internal data quality.

Despite the widespread use of AI tools like ChatGPT and Google Gemini, fewer than half have an AI Acceptable Use Policy. Additionally, 45% of organizations experienced unintended data exposure during AI implementation.

The widespread adoption of AI across various industries necessitates strong security measures. These vulnerabilities could potentially allow attackers to manipulate AI models, steal sensitive data, or disrupt critical operations.



Samsung details the power of Gorilla Armor on the Galaxy S24 Ultra


Gorilla Armor protection is the most game-changing upgrade on Samsung’s latest flagship phone, the Galaxy S24 Ultra. It marks a significant uptick in the toughness and optical functionality of the phone over previous flagships. And now Samsung is going over it in more detail.

Gorilla Armor: Samsung unveils the secret behind the Galaxy S24 Ultra’s glare-free screen

Despite all the buzz, both Samsung and Corning had kept details about the Gorilla Armor display under wraps. Recently, however, Samsung has explained how Gorilla Armor functions, how scratch-resistant it is, and detailed its anti-reflective properties.

In a video, Samsung shows how Gorilla Armor uses an intricate layering process, applying nanometre-thick coatings to attain greater scratch resistance and optical clarity. The result is a longer lifespan and 75% less glare, for better readability across different lighting environments.

Because the material incorporates this layering during production, the Galaxy S24 Ultra screen shows remarkable resilience to scratches. Vacuum-deposition systems operating in an ultra-clean environment give the display more than four times the scratch resistance of traditional cover glasses.

Corning’s Gorilla Armor is three times better at drop tests

Additionally, Samsung’s rigorous tests show that Gorilla Armor performs three times better in drop tests than traditional cover glass. Moreover, it also reduces reflectance, allowing users to watch videos even under direct sunlight or other adverse lighting conditions.

Samsung’s inclusion of Gorilla Armor on the Galaxy S24 Ultra is a testament to the company’s ongoing innovation. It offers not only protection against scratches and drops but also improved display clarity and readability, making it easier for users to view content no matter their location.

Samsung phones are increasingly popular among consumers for productivity as well as entertainment; features like Gorilla Armor therefore help prolong the device’s life and enhance user satisfaction. With its advanced display technology, the Galaxy S24 Ultra sets new benchmarks in durability and performance among premium phones.

Gorilla Armor, currently exclusive to Samsung, is a huge step forward in display protection, giving customers peace of mind and a better viewing experience on the Galaxy S24 Ultra.



New Phishing Scam Drops Byakugan Malware via Fake PDF


Cybersecurity firm Fortinet alerts users of a phishing scam campaign distributing the Byakugan malware. This malware steals sensitive information and grants attackers remote access to infected Windows devices.

Malware Found in PDF File:

In January 2024, FortiGuard Labs discovered a Portuguese-language PDF file distributing Byakugan, a multi-functional malware. Researchers found a blurred table in the PDF alongside instructions telling victims to click a malicious link to view the content.

Screenshot of the PDF files used in the attack and the installer embedded in the downloader (Credit: Fortinet)

Once clicked, the downloader drops a copy of itself named require.exe. A clean installer is then downloaded to the temp folder, followed by a DLL that is executed via DLL hijacking so that require.exe can download the main module.

The downloader, named require.exe and located in the temp folder, executes the copy rather than Reader_Install_Setup.exe, and the two files exhibit different behaviour. Byakugan’s main module is downloaded from thinkforce.com, a C2 server that may also serve as the attacker’s control panel, with a login page on port 8080.

AhnLab Security Intelligence Center (ASEC) also discovered an infostealer disguised as an Adobe Reader installer, delivered through a fake Portuguese-language PDF file that urged users to download Adobe Reader, leading to the execution of a malicious file, Reader_Install_Setup.exe.

The installer further creates two malicious files and runs a Windows system file, msdt.exe, as an administrator, which loads the malicious BluetoothDiagnosticUtil.dll. Through this DLL hijacking, the threat actor can bypass User Account Control (UAC).

Byakugan Malware Key Features

Byakugan is a Node.js-based malware that uses OBS Studio to monitor the target’s desktop, among various other functions. It bundles several modules, including a screen monitor, a cryptocurrency miner, a keylogger, file-manipulation tools, and a browser information stealer.

Moreover, Byakugan can choose between mining with the CPU or GPU to avoid overloading the system, and downloads popular miners such as XMRig, T-Rex, and NBMiner. It also stores data in the kl folder and can steal information about “cookies, credit cards, downloads, and auto-filled profiles,” the researchers wrote.

Byakugan also has anti-analysis features, such as masquerading as a memory manager and adding its own path to Windows Defender’s exclusion list. Additionally, it drops a task scheduler configuration file into the Defender folder, enabling it to execute automatically at startup. However, this newer variant does not download the software from its domain.

Infection flow (Credit: Fortinet)

How to Stay Safe?

Threat actors are combining clean and malicious components in malware such as Byakugan, making detection difficult, FortiGuard researchers noted. To stay protected from phishing attacks and such deceptive malware, users must be cautious with emails and verify sender legitimacy.

Additionally, use strong passwords and two-factor authentication, keep software updated, and install security software that can detect and block phishing emails and malware. Avoid clicking links or downloading attachments from suspicious emails; when in doubt, contact the sender directly to verify.



Gmail on Android gets AI-powered summaries and Gemini toggle


Google’s Gmail app for Android is getting a major upgrade that will bring it closer to its desktop equivalent. The update introduces AI-based email summaries. The new feature called Summarize this email will provide users with short descriptions of long emails or threads, similar to the web version of Gmail.

Gmail for Android will automate email summaries with a single Summarize this email button

The new Summarize this email button, located below the subject line of an email, will enable users to quickly understand complicated messages. Although the button currently does nothing when pressed, it should improve efficiency while reading emails once fully operational.

PiunikaWeb projects that, while the feature may behave differently in the mobile app than on the web, it will boost productivity for consumers, especially Google One premium subscribers. Additionally, a Gemini button within Gmail’s three-dot menu on Android suggests further AI-driven features are coming to the app.

Gemini, Google’s AI-based platform, is surfacing on various Google services including Gmail. Although it received criticism for its style of generating images, Gemini remains a critical part of Google Workspace apps that include reply suggestions and now email summaries.

Gemini also appears inside the app within the three-dot menu

The exact functionality of the Gemini button in the three-dot menu is not yet clear, but speculation is that it provides access to Google’s AI chatbot and related features within the app. These enhancements are just a few examples of how Google uses AI to improve user experience and productivity across its services.

Generally speaking, this integration points to Google’s emphasis on innovation and continuous improvement through AI. Email summaries, along with other Gemini functionalities, will become an integral part of Gmail in the coming days.

As these updates await release, users can look forward to an improved email-management experience on Android. The new features will make Gmail more efficient, reflecting Google’s mission of simplifying and improving everyday tasks through AI.



NYC Mayor defends AI chatbot accused of giving bad advice


In response to reports indicating that MyCity may be providing incorrect and misleading information, NYC Mayor Eric Adams has come out to defend it. For those unaware, the New York City government recently introduced the MyCity AI chatbot to assist business owners with their queries.

NYC Mayor defends controversial AI Chatbot amid accusations of misleading guidance

Its aim is to help business owners who require round-the-clock access to information and guidance. Nevertheless, an investigation by The Markup, with support from local newsrooms Documented and The City, found several instances where it gave incorrect legal advice.

For example, the bot wrongly claimed that landlords could discriminate against tenants on the grounds of income and that employers could pocket some of their workers’ tips; both practices are illegal in New York City.

Like other chatbots built on Microsoft’s Azure AI large language models, MyCity is prone to giving inaccurate or misleading information. Accordingly, Mayor Adams acknowledged the tool’s shortcomings, saying “It’s off in certain areas,” but defended its general aim.

He said that, by nature, this kind of technology has to prove itself in real-life settings so that problems can be surfaced and solved. However, subsequent checks revealed the fixes had failed: the bot was still making false claims, such as that businesses may stop accepting cash payments, which contradicts New York law.

MyCity AI could prove counterproductive, requiring more consultations with lawyers

Andrew Rigie, director of the NYC Hospitality Alliance, praised the city’s attempt at using AI-powered technology while emphasizing the need for accuracy. He warned that following erroneous suggestions from such bots could defeat their purpose and force business owners into more consultations with lawyers.

Due to this controversy, the MyCity chatbot now includes a disclaimer warning users that replies may be unreliable or incomplete; hence they should not rely on them for legal or professional purposes.

This incident exemplifies how difficult it is to bring artificial intelligence into public services, and underscores the need for accuracy and dependability when offering legal guidance. As efforts to improve the MyCity chatbot continue, stakeholders remain on the lookout for further issues that may need addressing.



Pixel users will soon be able to easily “Lookup” unknown numbers from their call log

If you are tired of seeing those mysterious and spammy calls invading your call log, you are in luck — if you have a Google Pixel, that is. Google is currently working on a new feature for Pixel phones called “Lookup”, and it’s designed to put an end to the guessing game when it comes to phone calls.
Think of Lookup as a search tool for your calls. It lets you run a quick online search on those unknown numbers directly from your phone app, which means no more copying and pasting into a browser phone number search tool. This was initially spotted over in Japan, and now it looks like “Lookup” is being tested in the beta version of Google’s Phone app, as found by AssembleDebug (via PiunikaWeb).

Using it sounds super simple. Just head to your recent calls, tap the unknown number, and check out the options. “Lookup” should be the last option on the right side. Tap that, and you’ll see a list of apps that can search the web (including trusty Google Search). Choose your preferred app and search for the unknown number.

Image Credit: PiunikaWeb

It should be noted that “Lookup” won’t magically reveal the identity of every random caller. It’s really best for finding businesses or flagging pesky numbers that have popped up on scam alert websites. So, it’s not quite a full replacement for hardcore caller ID apps like Truecaller, yet.

There is no official timeline from Google on when “Lookup” will hit Pixels everywhere. Since it is still in testing and only visible after enabling some hidden flags, it might not be ready until the next feature drop, or perhaps not until Android 15 lands later this fall. In the meantime, it’s best to exercise caution when dealing with potential scammers, a rule worth following whether or not you have a handy lookup tool.


Cookie consent choices are just being ignored by some websites


In news that is, sadly, unlikely to shock you, new research indicates that many websites ignore visitors’ choices to refuse cookies and collect their data anyway.

Researchers at the University of Amsterdam (UvA) analyzed 85,000 European websites and came to the conclusion that 90% of them violated at least one privacy regulation.

(Graph: percentages of cookie privacy violations. Image courtesy of UvA)

Cookies are bits of data that websites save on your computer when you look at a page, view an image, download a file, or interact with them in any other way. Cookies help websites remember you, which is often useful, particularly if you are logging in to a website, but they can also be used for things that some users don’t like, such as tracking. Tracking cookies are used by marketers to target you with ads that may interest you based on your browsing habits.

Working with researchers from Swiss university ETH Zürich, the team from UvA created a machine-learning tool that allowed them to analyze 100,000 websites. The main goal was to compare what information websites said they would gather with what they actually did. The researchers found an enormous number of privacy violations.

To make the data more insightful, they distinguished between “naive” violations and deliberate violations.

Naive violations are things like not offering a choice to reject cookies (affecting 57% of sites), and forgetting to ask for permission to store cookies (which occurred on 32% of websites visited by Europeans). Forgetting to ask for permission, or making it very hard to reject cookies, is very easy to spot, yet several major website owners have already been fined for violations like this.

But then we enter the realm of deliberate privacy violations. Of the websites that offered visitors a choice, 65% used tracking cookies, even if visitors chose to reject them. In many cases, websites created the cookies even before the visitor had the chance to make their choice.
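A first-pass check for this behaviour is conceptually simple: inspect the cookies a site sets on the very first response, before the visitor has had any chance to consent. The helper below is a hypothetical sketch that parses raw `Set-Cookie` headers with Python's standard library; it is not the researchers' actual tool.

```python
from http.cookies import SimpleCookie

def cookies_set_on_first_load(set_cookie_headers: list[str]) -> list[str]:
    """Given the Set-Cookie headers from a site's very first response
    (before any consent interaction), list the cookie names stored."""
    names: list[str] = []
    for header in set_cookie_headers:
        cookie = SimpleCookie()
        cookie.load(header)   # parse one Set-Cookie header
        names.extend(cookie.keys())
    return names

# Example: a site that drops a year-long tracking cookie pre-consent.
headers = [
    "session_id=abc123; Path=/; HttpOnly",
    "_tracking=xyz; Path=/; Max-Age=31536000",
]
print(cookies_set_on_first_load(headers))  # ['session_id', '_tracking']
```

In practice, any non-essential cookie appearing in this list before the consent banner has even been answered is exactly the kind of violation the UvA researchers measured at scale.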

More than 77% of the websites chose to interpret closing a cookie notification dialog as user consent.

On top of this, many websites also used so-called “dark patterns” to manipulate visitors into making the site’s preferred choice. Dark patterns, also known as deceptive design patterns, occur when a user interface has been crafted to nudge or trick users into doing things they didn’t set out to do.

Dark patterns are not subliminal messaging or visual or auditory stimuli that the conscious mind cannot perceive, although advertisers have been accused of using those as well. It’s more like making the accept button bright and easy to find and the reject button dark, smaller, or harder to read.

The researchers concluded that the cookie consent system is working far from satisfactorily. Small websites often lack the technical and legal knowledge to comply, while others simply choose to ignore or bend the rules.

And warnings to website owners seem to fall on deaf ears. Since March 31, 2021, when the deadline set for websites and mobile applications to comply with the new rules on cookies expired, the French privacy watchdog Commission Nationale de l’Informatique et des Libertés (CNIL) has adopted nearly 100 corrective measures (orders and sanctions) related to non-compliance with the legislation on cookies.



