Harry Potter Dreams Come True with Facebook AI Invisibility Cloak

This "Invisibility Cloak" powered by AI in the real world will protect you from AI cameras.

Invisibility cloaks have so far existed only in fiction, but Harry Potter fans may soon get to live out the fantasy. Recently, researchers at the University of Maryland, College Park, working with Facebook AI, created a real-world "invisibility cloak." The cloak, which is actually a vibrant sweater, instantly erases the wearer from a machine's field of view. According to a Gagadget story, the research team printed adversarial patterns on the sweater to hide the wearer from the most popular object detectors. Simply put, a person wearing the sweater becomes "invisible" to AI models that detect people.

The developers originally set out to probe machine learning systems for flaws, but the outcome was a print on clothing that AI cameras cannot see. With the caption, "This sweater produced by the University of Maryland leverages 'adversarial patterns' to become an invisibility cloak against AI," a Reddit user posted a video of the test footage. "In the office or on the run, this chic pullover is a terrific way to remain warm this winter. It has a contemporary fit, a stay-dry microfleece lining, and an adversarial pattern that evades the majority of object detectors. For demonstration, a pattern trained on the COCO dataset with a carefully crafted objective is used to evade the YOLOv2 detector," the group wrote.

The researchers used the COCO dataset, on which the YOLOv2 computer vision system is trained, to find the patterns the detector relies on to recognize a person. The same process, run in reverse, produced the adversarial pattern, which was then turned into an image and printed on a sweater. The wearer of such a sweater can therefore avoid being picked up by security systems.
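To illustrate the idea, here is a minimal sketch of this kind of adversarial-patch optimization. It is not the authors' code: the tiny detector below is a hypothetical stand-in for YOLOv2, the random images stand in for COCO person crops, and all names are illustrative. The core technique is the same, though: render a learnable patch onto training images and optimize it so the detector's confidence collapses everywhere.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyDetectorStub(nn.Module):
    """Hypothetical stand-in for YOLOv2: maps an image to one score per
    grid cell, loosely mimicking a detector's per-prior confidences."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.backbone(x).flatten(1)   # (batch, num_priors)

detector = TinyDetectorStub().eval()
for p in detector.parameters():
    p.requires_grad_(False)                  # attack the patch, not the model

# The learnable patch: the "print" that ends up on the sweater.
patch = torch.rand(3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for step in range(100):
    imgs = torch.rand(8, 3, 64, 64)          # stand-in for COCO person crops
    imgs[:, :, 16:48, 16:48] = patch         # "wear" the patch on each person
    scores = detector(imgs)
    # Objective: drive the detector's confidence down at EVERY prior, so no
    # bounding box around the wearer survives score thresholding.
    loss = torch.sigmoid(scores).max(dim=1).values.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    patch.data.clamp_(0, 1)                  # keep the patch a printable image
```

After training, the optimized patch would be printed onto fabric; the real work additionally models printing colors, fabric distortion, and viewing angles so the pattern survives the transfer to a physical sweater.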

The team's explanation of the project can be found on the University of Maryland website: "Most research on real-world adversarial attacks has concentrated on classifiers, which assign a single label to an entire image, rather than detectors, which localize specific objects within an image. Detectors work by considering thousands of 'priors' (potential bounding boxes) with different locations, sizes, and aspect ratios as they scan the image. To fool an object detector, an adversarial example must fool every prior in the image, which is far more difficult than fooling the single output of a classifier."
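To make that difference concrete, here is a toy contrast between the two attack objectives (hypothetical tensor shapes, not the authors' code): a classifier attack only has to push down one output, while a detector attack must keep every prior's score below the detection threshold, because a single surviving box still reveals the wearer.

```python
import torch

torch.manual_seed(0)
person_class = 0                      # hypothetical class index

logits_cls = torch.randn(1000)        # classifier: one prediction per image
scores_det = torch.randn(5000)        # detector: one score per prior (box)

# Classifier attack: minimizing a single logit is enough to flip the label.
cls_loss = logits_cls[person_class]

# Detector attack: any one prior above threshold yields a detection, so the
# objective must suppress the worst case across ALL candidate boxes.
det_loss = torch.sigmoid(scores_det).max()
```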

The "magic sweater" captured the attention of many Redditors, but several questioned its efficacy. One user made fun of it by joking that it was "so ugly even AI doesn't want to view it." One more said, "I mean." It could be a stretch to claim invisibility. He is still visible to the camera, but not entirely. Am I mistaken in assuming that the police might use this to track down criminals? The YOLOv2-targeting adversarial sweatshirts only had a 50% success rate in the wearable test, according to a Hackster report
