Summary: Universal Neural-Cracking-Machines: Self-Configurable Password Models (arxiv.org)
21,020 words - PDF document
One Line
The paper presents a universal password model that adjusts its guessing strategy to the target system; both seeded and manually tailored models outperform the baseline, with seeded models slightly more effective.
Key Points
- The concept of a "universal" password model is introduced, which can automatically change its guessing strategy based on the target system.
- Password models are important for password security and have various applications such as penetration testing and password strength meters.
- Attention mechanisms let password models compute, for each query vector, a function of that query and all the other vectors in a set.
- A Universal Neural-Cracking-Machine (UNCM) combines two deep neural networks: a conditional password model and a configuration encoder.
- The UNCM uses an encoder to produce a configuration seed with a sub-second latency, which can be applied to the conditional password model.
- The UNCM outperforms manually configured password models in guessing passwords and can detect weak passwords missed by the non-seeded universal model.
- Privacy can be achieved by making the seed differentially private, and the privacy level is quantified using a noise multiplier and evaluated for different credential databases.
- The first self-configurable password model is presented, which uses auxiliary data to adapt to the target password distribution at inference time.
Summaries
36 word summary
This paper introduces a universal password model that adapts its guessing strategy based on the target system. Results show that both seeded and tailored models perform better than the baseline. Seeded models are slightly more effective.
89 word summary
This paper introduces the concept of a "universal" password model that can adapt its guessing strategy based on the target system. The model uses deep learning to capture the correlation between users' auxiliary data and their passwords. Password models are important for password security and have applications such as penetration testing and password strength meters.
Results from Figure 8 demonstrate that both seeded password models and manually tailored ones perform better than the baseline. Seeded models are slightly more effective, indicating that the prior learned from users' email addresses is more informative than the language prior exploited by manual tailoring. Privacy is achieved by making the configuration seed differentially private.
917 word summary
We introduce the concept of a "universal" password model that can automatically change its guessing strategy based on the target system. The model uses deep learning to capture the correlation between users' auxiliary data (such as email addresses) and their passwords, creating a per-system configuration seed.
Different training sets require different settings for password models, and the average end-user finds it challenging to obtain well-trained, calibrated password models. This paper introduces the concept of a universal password model, called Universal Neural-Cracking-Machines (UNCMs).
Password models are important for password security and have various applications such as penetration testing and password strength meters. Password Strength Meters (PSMs) use password models to rank passwords by their security. Autoregressive password models segment passwords into atomic components and assign each password a probability as a product of per-component conditional probabilities.
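As a toy illustration of the autoregressive factorization described above, the sketch below scores a password as a product of per-character conditional probabilities. A smoothed bigram table stands in for the neural network; the training list and all constants are illustrative, not from the paper:

```python
import math

# Toy autoregressive model: P(password) = prod_t P(char_t | prefix).
# A bigram character table stands in for the neural network.
counts = {}
for pw in ["password", "pass1234", "dragon", "p@ssw0rd"]:
    for a, b in zip("^" + pw, pw + "$"):      # ^ = start marker, $ = end marker
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1

def log_prob(password):
    """Sum per-transition log-probabilities with add-one smoothing
    over an assumed alphabet of ~96 printable characters."""
    lp = 0.0
    for a, b in zip("^" + password, password + "$"):
        nxt = counts.get(a, {})
        total = sum(nxt.values())
        p = (nxt.get(b, 0) + 1) / (total + 96)
        lp += math.log(p)
    return lp

# A common password scores higher (is easier to guess) than a random one.
print(log_prob("password") > log_prob("zXq#8!Lr"))  # True
```

A PSM built on such a model would rank passwords by this probability: the more likely the model finds a password, the weaker it is.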
In this paper, the authors discuss the use of attention mechanisms in password models. An attention mechanism is a function that operates on sets of vectors: it computes a function of each query vector and all the other vectors in the set.
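The operation described here can be sketched as standard scaled dot-product attention over a set of vectors; this is the generic formulation, not the paper's exact architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def set_attention(queries, keys, values):
    """Scaled dot-product attention: each query attends to every
    vector in the set and returns a weighted sum of the values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # (n_q, n_kv) similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ values                   # (n_q, d_v)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                   # a set of 5 vectors
out = set_attention(x, x, x)                  # self-attention over the set
print(out.shape)                              # (5, 8)
```

Because the output is a weighted sum over the whole set, the operation is permutation-invariant in the keys/values, which is what makes it suitable for encoding an unordered set of user records.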
Universal Neural-Cracking-Machines (UNCMs) are introduced as a concept. Password strength is not a universal property; it varies with the context in which a password is created and used, so different systems induce different password distributions.
The UNCM is a combination of two deep neural networks: the conditional password model and the configuration encoder. The conditional password model is a probabilistic password model whose guessing strategy can be altered by an external configuration seed.
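A heavily simplified sketch of this two-network split follows. Hashing-based stand-ins replace the learned encoders, and every function name and dimension is illustrative, not the paper's implementation:

```python
import hashlib
import numpy as np

D = 16  # seed / embedding dimension (illustrative)

def embed_string(s):
    """Stand-in for a learned sub-encoder: deterministically map a
    string (email provider, domain, ...) to a D-dim vector."""
    h = int(hashlib.md5(s.encode()).hexdigest(), 16) % 2**32
    return np.random.default_rng(h).normal(size=D)

def configuration_encoder(emails):
    """Pool per-user embeddings into one fixed-size configuration seed
    (a permutation-invariant mixing step)."""
    return np.mean([embed_string(e) for e in emails], axis=0)

def conditional_password_model(prefix, seed):
    """Stand-in for P(next char | prefix, seed): the seed shifts the
    logits of an otherwise universal autoregressive model."""
    base = embed_string(prefix)                       # universal, prefix-dependent part
    logits = np.resize(base, 96) + np.resize(seed, 96)
    e = np.exp(logits - logits.max())
    return e / e.sum()                                # distribution over 96 characters

seed = configuration_encoder(["alice@mail.it", "bob@mail.it"])
probs = conditional_password_model("pass", seed)
```

The key design point mirrored here is that the same password-model weights serve every target system; only the small seed vector changes per system.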
The training set used for the proposed model is a credential leak collection from Cit0day.in. In November 2020, Cit0day.in experienced a security incident that leaked over 22,500 previously breached credential databases.
The configuration encoder consists of a sub-encoder and a mixing-encoder, with each input processed differently. The provider and domain strings are first discretized and embedded using separate embedding matrices. Strings that appear with low frequency are excluded and mapped to a shared unknown token.
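The low-frequency cutoff and embedding lookup might look like the following sketch; the vocabulary, counts, threshold, and the `<OOV>` token name are all illustrative assumptions:

```python
import numpy as np

# Toy vocabulary for email providers; strings below a frequency
# threshold are mapped to a shared out-of-vocabulary token.
MIN_COUNT = 2
provider_counts = {"gmail.com": 120, "yahoo.com": 45, "rare-mail.xyz": 1}

vocab = {"<OOV>": 0}
for provider, count in provider_counts.items():
    if count >= MIN_COUNT:
        vocab[provider] = len(vocab)

rng = np.random.default_rng(0)
embedding = rng.normal(size=(len(vocab), 8))   # one 8-dim row per token

def embed_provider(p):
    """Look up a provider's embedding, falling back to <OOV>."""
    return embedding[vocab.get(p, vocab["<OOV>"])]

# Rare and unseen providers share the same embedding row.
print(np.array_equal(embed_provider("rare-mail.xyz"),
                     embed_provider("unseen.org")))  # True
```

Collapsing rare strings into one token keeps the embedding matrices small and avoids memorizing strings seen only once in the leak collection.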
The UNCM's encoder produces a configuration seed with sub-second latency; the seed is then applied to the conditional password model, which is implemented by extending a non-conditional autoregressive password model.
A UNCM is trained to generate passwords using a configuration encoder and a password model: the encoder retrieves information from auxiliary data to guide the password model, and the two are trained together on multiple credential databases.
The pre-trained UNCMs outperform manually configured password models at guessing passwords. Compared to a baseline model, the UNCMs show consistent but varying performance gains across databases.
The excerpt provides additional results comparing the password models and dynamic dictionary attacks. The seeded and non-seeded password models have the same number of parameters but differ only in their access to the configuration seed. The UNCM detects weak passwords missed by the non-seeded universal model.
Results from Figure 8 show that both seeded password models and manually tailored ones outperform the baseline. Seeded models perform slightly better, suggesting that the prior the UNCM learns from users' email addresses can be more informative than the language prior exploited by manual tailoring.
Privacy is achieved by making the seed differentially private; the training set used to fit the model does not itself need protection. If the training set is assumed to be private, standard differentially private stochastic gradient descent (DP-SGD) can be used. Different privacy levels are obtained by varying the amount of noise added to the seed.
The excerpt discusses the privacy level and utility loss of the differentially private configuration encoder. The privacy level is quantified by a noise multiplier and evaluated for different credential databases; the privacy budget depends on the size of the input subset. The utility loss incurred by the added noise is also measured.
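One way to realize a differentially private seed is a generic Gaussian mechanism: bound each user's contribution, aggregate, and add calibrated noise. The clip norm, noise multiplier, and sensitivity bookkeeping below are illustrative assumptions, not the paper's exact analysis:

```python
import numpy as np

def dp_seed(per_user_vectors, noise_multiplier, clip_norm=1.0, rng=None):
    """Differentially private seed (illustrative Gaussian mechanism):
    clip each user's vector, average, and add noise proportional to
    noise_multiplier. Larger multipliers mean stronger privacy and
    larger utility loss."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for v in per_user_vectors:
        norm = np.linalg.norm(v)
        clipped.append(v * min(1.0, clip_norm / norm))  # bound each user's influence
    n = len(clipped)
    mean = np.sum(clipped, axis=0) / n
    sigma = noise_multiplier * clip_norm / n            # noise scaled to the mean's sensitivity
    return mean + rng.normal(scale=sigma, size=mean.shape)

users = [np.random.default_rng(i).normal(size=16) for i in range(100)]
s = dp_seed(users, noise_multiplier=1.1)
print(s.shape)  # (16,)
```

The dependence of sigma on n matches the summary's point that the privacy budget depends on the size of the input subset: with more users, less noise per coordinate is needed for the same multiplier.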
A simplified depiction of a UNCM is shown in Figure 11. The model condenses auxiliary information into a fixed-size vector using a mixing encoder, enabling compact seeded password models and differential privacy.
The first self-configurable password model is presented: it uses auxiliary data to adapt to the target password distribution at inference time, addressing a major obstacle to applying password-security techniques in the real world. The use of leaked credentials raises ethical concerns.
This excerpt contains a list of references and sources related to password cracking and security, including articles, preprints, conference papers, and studies from various years. Key topics include leveraging personal information for password cracking, password strength evaluation, and password vaults.
This excerpt lists further references cited in the document, covering password strength meters, deep learning approaches to password guessing, the accuracy of password-cracking algorithms, and differential privacy.
One cited reference appears in full: "Fast, lean, and accurate: Modeling password guessability using neural networks." In 25th USENIX Security Symposium (USENIX Security 16), pages 175-191, Austin, TX, August 2016. USENIX Association.
Blase Ur et al. studied password strength meters and the creation of secure passwords, evaluating the performance of different password models and their impact on password guessability. The cited work draws on data from venues such as the CHI Conference on Human Factors in Computing Systems.
UNCMs perform better at guessing weak, language-specific passwords. The UNCM is compared to other password models, including PCFG, Markov chains, and semantic-PCFG.
Results for seeded password models are reported; they outperform dynamic dictionary attacks. The ADaMs attack is more efficient and applicable to real-world guessing attacks, while the Melicher model has limited applicability. Data cleaning steps are described for the Cit0day dataset.