
NIPS 2018 Adversarial Vision Challenge Results Announced: CMU's Eric Xing Team Wins Two Tracks



Selected from Medium

Author: Wieland Brendel

Compiled by Heart of the Machine

Contributors: Zhang Qian, Wang Shuting

Today, the results of the NIPS 2018 Adversarial Vision Challenge were announced. The competition was divided into three tracks: defense, untargeted attack, and targeted attack. The Petuum-CMU team led by Eric Xing won two of them, the remaining title went to the LIVIA team from Canada, and Tsinghua's TSAIL team took second place in the untargeted attack track. This article outlines the winning teams' methods; full details will be revealed at the NIPS Competition Workshop on December 7, 9:15-10:30.

NIPS 2018 Adversarial Vision Challenge address: https://www.crowdai.org/challenges/nips-2018-adversarial-vision-challenge-robust-model-track

Today, the results of the NIPS 2018 Adversarial Vision Challenge were announced; participating teams submitted more than 3,000 models and attack methods. This year's competition focused on real-world scenarios with limited model access (up to 1,000 queries per sample): the model returns only its final decision rather than gradients or confidence scores. This setup simulates the typical threat scenarios faced by deployed machine learning systems, and is expected to spur the development of effective decision-based attack methods and the creation of more robust models.
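The threat model described above can be made concrete with a small sketch. This is a hypothetical interface (the class name and structure are assumptions, not from the competition code): the attacker gets only the final label back, and each sample has a hard query budget.

```python
# Minimal sketch of a decision-only, query-limited black-box model,
# mirroring the competition's threat model (interface is hypothetical).
class DecisionOnlyModel:
    def __init__(self, classify_fn, max_queries=1000):
        self.classify_fn = classify_fn   # returns a label, never gradients
        self.max_queries = max_queries   # per-sample query budget
        self.queries = 0

    def __call__(self, x):
        if self.queries >= self.max_queries:
            raise RuntimeError("query budget exhausted for this sample")
        self.queries += 1
        return self.classify_fn(x)       # final decision only: no logits, no scores
```

An attacker working against such a wrapper must spend its budget carefully, which is exactly what motivates the efficient decision-based attacks described below.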

The robust-model leaderboard on the CrowdAI platform.

All winning entries performed at least an order of magnitude better than standard baselines (such as transfer attacks from a vanilla model, or a plain boundary attack), as measured by the median size of the L2 perturbation. We asked the top three entries in each track (defense, untargeted attack, targeted attack) for an outline of their approach. The winners will present their methods at the NIPS Competition Workshop on December 7 at 9:15-10:30.
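The evaluation criterion mentioned above, the median L2 perturbation size, is straightforward to compute. A minimal sketch (function name is mine, not the competition's scoring code):

```python
import numpy as np

def median_l2_score(originals, adversarials):
    """Median L2 norm of the perturbations across samples.
    For attacks, smaller is better; for defenses, larger is better."""
    dists = [np.linalg.norm((adv - orig).ravel())
             for orig, adv in zip(originals, adversarials)]
    return float(np.median(dists))
```

Using the median rather than the mean makes the score robust to a few samples where an attack fails badly or succeeds trivially.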

A common theme among the winning attacks was a low-frequency version of the boundary attack combined with ensembles of diverse defenses as substitute models. On the defense side, the winners used a new approach to training robust models (details may not be known until the workshop) based on adversarial training with a new iterative L2 attack. In the coming weeks we will publish more details about the results, including visualizations of the adversarial examples generated against the defense models.

Defense

First place: Petuum-CMU ("91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

To learn deep networks that are robust to adversarial examples, the authors analyzed the generalization performance of adversarially robust models. Based on this analysis, they propose a new formulation for learning robust models with both generalization and robustness guarantees.

Second place: Wilson team (method description not yet received)

Third place: LIVIA team ("Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, from the École de technologie supérieure (ÉTS Montréal), Canada

The authors trained a robust model using their proposed Decoupled Direction and Norm (DDN) attack, an iterative attack fast enough to be used during training. At each training step, they find an adversarial example close to the decision boundary (using DDN) and minimize the cross-entropy loss on that example. The model architecture is unchanged, and inference time is unaffected.
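The training loop structure described above can be sketched on a toy model. This is a minimal illustration, not the authors' implementation: it uses logistic regression in place of a deep network, and the `attack` callable stands in for DDN (any fast attack that returns an example near the boundary fits the same slot).

```python
import numpy as np

def adversarial_training_step(w, x, y, attack, lr=0.1):
    """One step in the spirit of the defense described above (simplified):
    craft an adversarial example for the current model, then update the
    model on that example instead of the clean one.

    w: weight vector of a toy logistic-regression model
    x, y: clean input and its binary label (0 or 1)
    attack: callable (w, x, y) -> perturbed x, a stand-in for DDN
    """
    x_adv = attack(w, x, y)                     # adversarial example near the boundary
    p = 1.0 / (1.0 + np.exp(-x_adv @ w))        # model's probability of class 1
    grad = (p - y) * x_adv                      # cross-entropy gradient for logistic regression
    return w - lr * grad                        # standard gradient step on the adversarial example
```

The key property the authors exploit is that the attack is cheap enough to run inside every training step, so the architecture and inference path stay untouched.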

Untargeted attack

First place: LIVIA team ("Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, from the École de technologie supérieure (ÉTS Montréal), Canada

The attack is based on a number of surrogate models (including a robust model trained with DDN, the authors' newly proposed attack). For each model, they pick two directions to attack: the gradient of the cross-entropy loss with respect to the original class, and the direction given by running a DDN attack. For each direction, they perform a binary search on the perturbation norm to find the decision boundary. They then take the best resulting attack and refine it with the boundary attack from "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models".
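The binary-search step described above, finding the smallest perturbation norm along a fixed direction that crosses the decision boundary, can be sketched as follows. The oracle `is_adversarial` and the default bounds are assumptions of this sketch, not details from the authors:

```python
import numpy as np

def boundary_binary_search(is_adversarial, x, direction, hi=10.0, steps=20):
    """Binary-search the perturbation norm along `direction` for (approximately)
    the smallest scale that flips the model's decision.

    is_adversarial: decision-only oracle, True if the input is misclassified
    x: clean input; direction: attack direction (need not be normalized)
    """
    d = direction / np.linalg.norm(direction)
    lo = 0.0
    if not is_adversarial(x + hi * d):
        return None                       # this direction never crosses the boundary
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if is_adversarial(x + mid * d):
            hi = mid                      # still adversarial: try a smaller norm
        else:
            lo = mid                      # not adversarial yet: grow the norm
    return hi                             # smallest adversarial scale found
```

Each probe costs one query, so with the 1,000-query budget the search over several directions has to stay shallow, which is why the result is only a starting point for the boundary-attack refinement.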

Second place: TSAIL ("csy530216" on the leaderboard)

Authors: Shuyu Cheng & Yinpeng Dong

The authors use a heuristic search algorithm, similar in spirit to the boundary attack, to refine adversarial examples. The starting point is found by transferring a BIM attack from the baseline model of "Adversarial Logit Pairing". In each iteration, a perturbation is sampled from a Gaussian distribution whose diagonal covariance matrix is updated from past successful trials to model the search direction. The perturbation is restricted to the central 40×40×3 region of the 64×64×3 image: 10×10×3 noise is generated first and then upsampled to 40×40×3 with bilinear interpolation. Restricting the search space makes the algorithm more query-efficient.
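The noise construction described above, low-dimensional Gaussian noise, bilinearly upsampled and embedded in the center of the image, can be sketched as follows. The resize helper and the fixed σ are assumptions of this sketch (the authors' covariance adaptation is omitted):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Minimal bilinear resize (align-corners style) for an H x W x C array."""
    h, w, _ = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def central_low_dim_noise(rng, sigma=1.0):
    """Sample 10x10x3 Gaussian noise, upsample to 40x40x3, and embed it in
    the central region of a 64x64x3 perturbation, as described above."""
    small = rng.normal(0.0, sigma, size=(10, 10, 3))
    patch = bilinear_resize(small, 40, 40)
    pert = np.zeros((64, 64, 3))
    pert[12:52, 12:52, :] = patch         # central 40x40 window of the 64x64 image
    return pert
```

Sampling in a 300-dimensional space instead of a 12,288-dimensional one, and keeping the perturbation smooth, is what makes each query count.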

Third place: Petuum-CMU ("91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

The authors combined different robust models and different adversarial attack methods from Foolbox, under various distance metrics, to generate adversarial perturbations. They then chose the attack that minimizes the maximum distance when attacking the robust models under those distance metrics.
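One simplified reading of this selection step can be sketched as follows: among candidate adversarial examples produced by different attacks, keep only those that fool every model in the ensemble, and pick the one with the smallest perturbation. The function name and the restriction to a single L2 metric are assumptions of this sketch:

```python
import numpy as np

def pick_best_adversarial(x, true_label, candidates, models):
    """Among candidate adversarial examples (one per attack), pick the one
    that fools every model in the ensemble with the smallest L2 perturbation.

    candidates: iterable of perturbed inputs
    models: iterable of callables input -> predicted label (decision only)
    """
    best, best_dist = None, np.inf
    for adv in candidates:
        if all(m(adv) != true_label for m in models):     # fools the whole ensemble
            d = np.linalg.norm((adv - x).ravel())
            if d < best_dist:
                best, best_dist = adv, d
    return best, best_dist
```

Requiring success against the whole ensemble is what makes the chosen perturbation likely to transfer to the unseen target model.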

Targeted attack

First place: Petuum-CMU ("91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia.

The authors used Foolbox to combine different robust models and different adversarial attack methods to generate adversarial perturbations. They found that this ensemble approach makes the targeted attack effective against several robust models at once.

Second place: fortiss ("ttbrunner" on the leaderboard)

Authors: Thomas Brunner & Frederik Diehl, fortiss GmbH, Germany

This attack is similar to the boundary attack, but it does not sample from a normal distribution. Instead, the authors use low-frequency perturbation patterns, which transfer well and are not easily filtered out by a defender. They also use the projected gradient of a substitute model as a prior for sampling. In this way, they combine the advantages of both PGD and the boundary attack into a flexible and query-efficient sampling method.
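The gradient-prior idea described above can be sketched as a biased sampling proposal: mix the substitute model's gradient direction with a random direction, then renormalize. The mixing weight `alpha` and the function name are hypothetical (the authors' actual projection and low-frequency construction are not spelled out here):

```python
import numpy as np

def biased_proposal(rng, grad_prior, shape, alpha=0.5):
    """Sampling proposal that mixes a substitute model's gradient direction
    (the prior) with random noise, returning a unit-norm step direction.

    alpha: hypothetical mixing weight; alpha=1 follows the prior exactly,
    alpha=0 is a pure random-search step as in the plain boundary attack.
    """
    g = grad_prior / (np.linalg.norm(grad_prior) + 1e-12)   # normalized prior direction
    noise = rng.normal(size=shape)
    noise /= np.linalg.norm(noise)                          # normalized random direction
    step = alpha * g + (1 - alpha) * noise
    return step / np.linalg.norm(step)                      # unit-norm proposal
```

When the substitute's gradient happens to point the right way, the walk behaves like PGD; when it does not, the noise component keeps the decision-based search alive.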

Third place: LIVIA team ("Jerome R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, from the École de technologie supérieure (ÉTS Montréal), Canada
