Proactive Computing | Optimizing IT for usability, performance and reliability since 1997

Category: #ArtificialIntelligence #AI (Page 1 of 2)


The Eyes Have It: Scientists Can Spot Deepfakes with a New AI Tool

Deepfake portraits with cornea analysis results underneath. Image credit: Shu Hu/Yuezan Li/Siwei Lyu, University at Buffalo

Thanks to a new AI tool created by computer scientists at the University at Buffalo, we can now spot portrait-style deepfakes with 94% accuracy. How does the tool do this? By analyzing the patterns of light reflected on each of the photographed person’s corneas, which in a genuine photo should match.

Corneas have a mirror-like surface, so both eyes should show a similar reflection shape produced by the lighting of the room or area the subject is in. In real photos, the two eyes have near-identical reflection patterns. Deepfake images, however—which are created by generative adversarial networks (GANs)—usually fail to synthesize this resemblance accurately and instead generate inconsistent reflections on each cornea, sometimes even at mismatched locations.

The AI tool, then, maps out the face, locates the eyes, and analyzes the reflection in each one. It then generates a similarity score: the lower the score, the more likely the image is a deepfake. The tool proved effective when scanning deepfakes from This Person Does Not Exist, a website full of images of nonexistent people generated with the StyleGAN2 architecture.
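The idea behind such a score can be illustrated with a minimal sketch—this is an assumption for illustration, not the researchers’ actual implementation. Given two aligned grayscale eye crops, we can threshold out the bright specular highlights and compare the resulting masks with intersection-over-union: overlapping highlights score near 1.0 (consistent with a real photo), mismatched ones near 0.0 (consistent with a GAN-generated face). The function names and threshold are hypothetical.

```python
import numpy as np

def highlight_mask(eye_crop, thresh=0.8):
    """Binary mask of bright specular-highlight pixels in a
    grayscale eye crop with values in [0, 1]. The threshold
    value is an assumption for this sketch."""
    return eye_crop >= thresh

def reflection_similarity(left_eye, right_eye, thresh=0.8):
    """Toy similarity score for corneal reflections: the
    intersection-over-union of the two highlight masks."""
    a = highlight_mask(left_eye, thresh)
    b = highlight_mask(right_eye, thresh)
    union = np.logical_or(a, b).sum()
    if union == 0:       # no visible highlight in either eye --
        return 0.0       # the failure mode the researchers note
    return np.logical_and(a, b).sum() / union

# Two 5x5 eye crops with the highlight in the same spot
real_l = np.zeros((5, 5)); real_l[2, 2] = 1.0
real_r = np.zeros((5, 5)); real_r[2, 2] = 1.0
print(reflection_similarity(real_l, real_r))  # 1.0 -> likely real

# Highlights in different spots, as a GAN often produces
fake_r = np.zeros((5, 5)); fake_r[0, 4] = 1.0
print(reflection_similarity(real_l, fake_r))  # 0.0 -> likely fake
```

A real detector would first need face and iris localization to produce aligned crops, and would compare highlight geometry more robustly than a hard threshold, but the low-score-means-fake decision rule is the same.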

However, the scientists who created the tool note that it has some limitations, chief among them that it relies on a reflected light source being visible in both eyes. If the subject is winking or blinking, it likely won’t work; nor will it if the subject is partially turned away and not looking directly at the camera, as the tool has only proved successful on portrait images. Additionally, anyone proficient in Photoshop may be able to edit out these inconsistencies, which would likely render the tool useless.

Despite these limitations, the tool still marks a big step forward for this type of technology. It won’t bust sophisticated deepfakes any time soon, but it can spot simpler ones and lay the foundation for more powerful detection technology in the future to go alongside our current capabilities to detect audio and video deepfakes.

via The Next Web

Proactive Computing found this story and shared it with you.
The Article Was Written/Published By: Suzanne Humphries

How to use AI to manage your brand’s social media campaigns

Our pandemic-plagued 2020 didn’t just change our daily routines and patterns. It also accelerated new trends that have been ramping up for years. For example, our time spent consuming digital content skyrocketed last year. Most U.S. adults engage with almost 8 hours of digital media every day, a 15 percent increase over 2019.  — Read the rest

The Article Was Written/Published By: Boing Boing’s Shop

Google AI classifies baking recipes and explains its predictions

One goal of AI researchers is to figure out how to make machine learning models more interpretable so researchers can understand why they make their predictions. Researchers have shown how to build an explainable model, which Google says is an improvement over taking the predictions of a deep neural network at face value without understanding what contributed to the model output. … Continue reading

The Article Was Written/Published By: Shane McGlaun

DeepMind’s latest AI can master games without being told their rules

In 2016, Alphabet’s DeepMind came out with AlphaGo, an AI which consistently beat the best human Go players. One year later, the subsidiary went on to refine its work, creating AlphaGo Zero. Where its predecessor learned to play Go by observing amate…


An AI is livestreaming a never-ending bass solo on YouTube

Even the most dedicated musicians have to put down their instruments sometimes, but on YouTube, you can listen to a bass solo that keeps going and going. Dadabots, which is also behind an endless death metal stream, used a recurrent neural network (R…


Massachusetts lawmakers vote to pass a statewide police ban on facial recognition


Massachusetts lawmakers have voted to pass a new police reform bill that will ban police departments and public agencies from using facial recognition technology across the state.

The bill was passed by both the state’s House and Senate on Tuesday, a day after senior lawmakers announced an agreement that ended months of deadlock.

The police reform bill also bans the use of chokeholds and rubber bullets, limits the use of chemical agents like tear gas, and allows police officers to intervene to prevent the use of excessive and unreasonable force. But following objections from police groups, the bill does not remove qualified immunity, a controversial measure that shields serving police from legal action for misconduct.

Lawmakers brought the bill to the state legislature in the wake of the killing of George Floyd, an unarmed Black man killed by a white Minneapolis police officer who has since been charged with his murder.

Critics have for years complained that facial recognition technology is flawed and biased, and that it disproportionately misidentifies people of color. The bill does, however, grant police an exception to run facial recognition searches against the state’s driver license database with a warrant. In granting that exception, the state will have to publish annual transparency figures on the number of searches made by officers.

The Massachusetts Senate voted 28-12 to pass, and the House voted 92-67. The bill will now be sent to Massachusetts governor Charlie Baker for his signature.

Kade Crockford, who leads the Technology for Liberty program at the ACLU of Massachusetts, praised the bill’s passing.

“No one should have to fear the government tracking and identifying their face wherever they go, or facing wrongful arrest because of biased, error-prone technology,” said Crockford. “In the last year, the ACLU of Massachusetts has worked with community organizations and legislators across the state to ban face surveillance in seven municipalities, from Boston to Springfield. We commend the legislature for advancing a bill to protect all Massachusetts residents from unregulated face surveillance technology.”

In the absence of privacy legislation from the federal government, laws curtailing the use of facial recognition are popping up on a state and city level. The patchwork nature of that legislation means that state and city laws have room to experiment, creating an array of blueprints for future laws that can be replicated elsewhere.

Portland, Oregon, passed a broad ban on facial recognition tech this September. The ban, one of the most aggressive in the nation, blocks city bureaus from using the technology and also prohibits private companies from deploying facial recognition systems in public spaces. Months of clashes between protesters and aggressive law enforcement in that city raised the stakes on Portland’s ban.

Earlier bans in Oakland, San Francisco and Boston focused on forbidding their city governments from using the technology but, like Massachusetts, stopped short of limiting its use by private companies. San Francisco’s ban passed in May of last year, making the international tech hub the first major city to ban the use of facial recognition by city agencies and police departments.

At the same time that cities across the U.S. are acting to limit the creep of biometric surveillance, those same systems are spreading at the federal level. In August, Immigration and Customs Enforcement (ICE) signed a contract for access to a facial recognition database created by Clearview AI, a deeply controversial company that scrapes facial images from online sources, including social media sites.

While most activism against facial recognition only pertains to local issues, at least one state law has proven powerful enough to make waves on a national scale. In Illinois, the Biometric Information Privacy Act (BIPA) has ensnared major tech companies including Amazon, Microsoft and Alphabet for training facial recognition systems on Illinois residents without permission.

Updated with comment from the ACLU of Massachusetts.


The Article Was Written/Published By: Taylor Hatmaker

Facial Recognition Not Just for People – Bears and Cows, Too

Facial recognition is being used in many different ways. We use it to log in to our phones and computers, and the police can use it to track criminals. There are more uses for it as well, such as with animals. Facial recognition is already being used to recognize bears and cows.

Facial Recognition for Grizzly Bears

Bear biologist Melanie Clapham studies grizzly bears in Knight Inlet in British Columbia, Canada. She has learned to differentiate between them by using “individual characteristics,” such as an ear nick or nose… Read more

The Article Was Written/Published By: Laura Tucker

Microsoft’s AI is now better at image captioning than humans

Describing an image accurately, and not just like a clueless robot, has long been the goal of AI. In 2016, Google said its artificial intelligence could caption images almost as well as humans, with 94 percent accuracy. Now Microsoft says it’s gone e…

