
Module 03: Open Source Intelligence (OSINT) Methodology 1

Written By AKADEMY on Thursday, July 4, 2019 | 9:57 PM

Exercise 1: Performing Information Gathering on a Target Organization

Scenario

Penetration testing is much more than just running exploits against vulnerable systems. In fact, a penetration test begins before the testers have even made contact with the victim's systems. Rather than blindly throwing out exploits and hoping that one of them returns a shell, a penetration tester meticulously studies the environment for potential weaknesses and their mitigating factors. By the time a penetration tester runs an exploit, he or she is nearly certain that it will succeed. Failed exploits can, in some cases, crash or even damage the target system, or at the very least make the target un-exploitable in the future. Testers who blindly turn an automated exploit engine on the target network with no preparation therefore get poor results and cannot deliver a thorough report to their clients.
A penetration tester collects a company's information such as internal and external links of the company's website, people working in the company, its geographical location, DNS information, competitive intelligence, and network range. This information is collected in order to identify vulnerabilities, and to sniff out and exploit valuable information. To become an expert penetration tester and security auditor, you must know the various techniques used to gather a company's information.
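One piece of the reconnaissance described above is separating a website's internal links from its external ones, since external links can reveal partners, social media accounts, and third-party services. The following is a minimal illustrative sketch of that idea (the sample links are assumptions for demonstration, not data from the lab target):

```python
# Hypothetical sketch: classify a page's links as internal or external
# relative to the target's base domain. Sample links are made up.
from urllib.parse import urlparse

def classify_links(links, base_domain):
    """Split links into internal (same domain or relative) and external lists."""
    internal, external = [], []
    for link in links:
        host = urlparse(link).netloc
        # Relative links have no host; subdomains still end with the base domain.
        if host == "" or host.endswith(base_domain):
            internal.append(link)
        else:
            external.append(link)
    return internal, external

links = [
    "http://www.luxurytreats.com/about.html",
    "/contact.html",                      # relative link -> internal
    "http://twitter.com/luxurytreats",    # external
]
internal, external = classify_links(links, "luxurytreats.com")
```

A real crawler would first collect the links from fetched pages; the classification step itself stays this simple.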
Lab Duration: 25 Minutes
  1. Click Windows Server 2012 (External Network). Press Ctrl+Alt+Delete.
    Screenshot
  2. In the Password field, type Pa$$w0rd and press Enter.
    You can use the Type Password option from the Commands menu to enter the password.
    Screenshot
  3. Navigate to E:\ECSAv10 Module 03 Open Source Intelligence (OSINT) Methodology\Web Data Extractor, double-click wde.exe file and follow the steps to install Web Data Extractor.
    If an Open File - Security Warning window appears click Run.
    Screenshot
  4. Once the installation process is completed, check Launch Web Data Extractor option and click Finish to launch the tool automatically.
    Screenshot
  5. The Web Data Extractor 8.3 main window appears as shown in the screenshot.
    Screenshot
  6. Click New to start a new session.
    Session settings window appears as shown in the screenshot.
    Screenshot
  7. Type a URL http://www.luxurytreats.com in the Starting URL field.
    Select the Retrieval depth radio button and check Stay within full URL option.
    Check all the options under Save data section and click OK.
    If you do not find the options, hover the mouse cursor over the Session settings window until all the options appear.
    Screenshot
  8. Click Start to initiate the data extraction.
    Web Data Extractor will start collecting the information (emails, phones, faxes, etc.). Once the data extraction process is completed, click anywhere on the window. An Information dialog box appears. Click OK.
    It takes some time for the data acquisition to complete.
    Screenshot
  9. The extracted information can be viewed by opening each of the tabs (Meta tags, Emails, Phones, etc.)
    Screenshot
  10. Select the Meta tags tab to view the URL, Title, Keywords, Description, Host, Domain, and Page size information.
    Screenshot
  11. Select the Emails tab to view email-related information such as the Email, Name, URL, Title, Host, Keywords density, etc.
    Screenshot
  12. Select the Phones tab to view the contact information such as Phone number, Source, Tag, URL, Title, Host, Keywords density and Keywords on Page.
    Screenshot
  13. Select the Faxes tab to view the information related to fax like Fax, Source, Tag, URL, Title, Host, Keywords density and Keywords on Page.
    Screenshot
  14. Select the Merged list tab, then click on Generate Merged List for Extracted Data icon to view the information like URL, Host, Domain, Title, Description, Keywords, Email, Phone, Phone source, Phone tag, Fax, Fax source and Fax tag on the page.
    Screenshot
  15. Select the Urls tab to view all the URLs extracted.
    Screenshot
  16. Select the Inactive Sites tab to view all the extracted URLs of the inactive sites.
    In the Inactive Sites tab, Web Data Extractor lists the hosts on which it could not access any webpages.
    In this lab, there are no inactive sites discovered.
    Screenshot
  17. Go to File and click Save session to save the session.
    You can also press Ctrl+S on the keyboard to save the session.
    Screenshot
  18. The Save session dialog box appears.
    Specify luxurytreats.com as the session name in the Please specify session name: text field and click OK.
    A session name is assigned by default. You can either change the name or continue the lab with the default name.
    Screenshot
  19. Click the Meta tags tab and then click the floppy icon located at the top-left corner of the Meta tags section.
    Screenshot
  20. As a demo version of Web Data Extractor is being used, an Information Pop-up appears stating that you cannot save more than 10 records in the demo version. Click OK to close the pop-up.
    Screenshot
  21. The Save Meta tags window appears; choose a file format and click Save.
    The file format chosen in this lab is HTML.
    Screenshot
  22. Navigate to the location C:\Program Files (x86)\WebExtractor\Data\luxurytreats.com and double-click metatags_data.html to view the report.
    A pop-up appears: How do you want to open this type of file (.html)?. Choose an application to view it. In this lab, the Firefox web browser has been chosen.
    Screenshot
  23. The report appears in default browser as shown in the screenshot.
    Screenshot
  24. In the same way, save the other information related to the target, such as emails, phones, and so on.
    Website data is successfully collected using Web Data Extractor. Close the application as well as all the windows that were opened.
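The kind of harvesting Web Data Extractor automates can be sketched in a few lines: scan raw HTML for strings shaped like email addresses and phone numbers. This is an illustrative sketch only; the sample HTML and the patterns are assumptions, not the tool's actual internals:

```python
# Illustrative sketch of contact harvesting from raw HTML, similar in spirit
# to what Web Data Extractor automates. Sample HTML and regexes are assumptions.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def extract_contacts(html):
    """Return sorted, de-duplicated emails and phone numbers found in html."""
    emails = sorted(set(EMAIL_RE.findall(html)))
    phones = sorted(set(p.strip() for p in PHONE_RE.findall(html)))
    return emails, phones

sample = """<p>Contact us at sales@luxurytreats.com or
support@luxurytreats.com. Phone: +1 555-010-2345</p>"""
emails, phones = extract_contacts(sample)
```

Real pages add noise (obfuscated addresses, digits that are not phone numbers), which is why a dedicated tool also tracks the source URL and surrounding keywords for each hit, as seen in the tabs above.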
  25. Install WebSite-Watcher tool in order to monitor web updates.
    WebSite-Watcher is a closed-source program that tracks changes in user-defined web pages.
  26. Navigate to E:\ECSAv10 Module 03 Open Source Intelligence (OSINT) Methodology\WebSite-Watcher, double-click wswsetup.exe and follow the steps to install WebSite-Watcher tool.
    If an Open File - Security Warning pop-up appears, click Run.
    Screenshot
  27. On completion of installation, check Launch WebSite-Watcher, and click Finish. This will automatically launch the application.
    Screenshot
  28. WebSite-Watcher main window appears on the screen along with a Welcome wizard, choose the preferred language and click Next.
    Screenshot
  29. Click Finish button on Welcome to WebSite-Watcher window.
    Screenshot
  30. WebSite-Watcher License pop-up appears, choose any of the edition options and click Continue.
    Screenshot
  31. WebSite-Watcher main window appears as shown in the screenshot.
    Screenshot
  32. Click New icon in the toolbar.
    Wizard: New Bookmark window appears.
    Select The page can be accessed directly option, enter http://www.luxurytreats.com as the target company’s URL in the URL: text field, and then click Next.
    Screenshot
  33. The application initializes the page and Select Page Type section appears in the Wizard: New Bookmark window.
    Select Webpage and click Next.
    Screenshot
  34. Finished section appears, in the Wizard: New Bookmark window. Click Finish.
    Screenshot
  35. Double-click the target link http://www.luxurytreats.com (third in the list) to monitor the updates made to the website.
    Screenshot
  36. Web updates are successfully monitored using WebSite-Watcher.
    Close the application as well as all the windows that are open.
    Because this lab uses a local website, you will not see any updates. If you try this lab on frequently updated sites (such as CNN or BBC), you will be able to monitor their updates.
    Screenshot
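The core idea behind change monitoring of the kind WebSite-Watcher performs is simple: store a fingerprint of the page the last time it was seen and flag a change whenever the fingerprint differs. A minimal sketch of that idea, with two hard-coded snapshots standing in for two fetches of the same URL (the snapshots are assumptions for demonstration):

```python
# Minimal sketch of change detection via content hashing. The snapshot
# strings stand in for two fetches of the same URL; they are assumptions.
import hashlib

def page_fingerprint(html):
    """Return a SHA-256 fingerprint of the page content."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def has_changed(old_fingerprint, html):
    """True if the page content no longer matches the stored fingerprint."""
    return page_fingerprint(html) != old_fingerprint

snapshot1 = "<html><body>Luxury Treats - Home</body></html>"
snapshot2 = "<html><body>Luxury Treats - New Arrivals!</body></html>"

baseline = page_fingerprint(snapshot1)
unchanged = has_changed(baseline, snapshot1)  # same content
changed = has_changed(baseline, snapshot2)    # content differs
```

Production monitors typically also normalize the page first (stripping timestamps, ads, and session tokens) so that only meaningful edits trigger an alert.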
  37. Install WinHTTrack Web Site Copier in order to mirror a specific website.
    HTTrack Website Copier allows you to download a World Wide Web site to a local directory, recursively building all directories and getting HTML, images, and other files from the server onto your computer.
  38. Navigate to E:\ECSAv10 Module 03 Open Source Intelligence (OSINT) Methodology\WinHTTrack Website Copier, double-click httrack-3.49.2.exe and follow the steps to install HTTrack tool.
    If an Open File - Security Warning window appears, click Run.
    While installing WinHTTrack Website Copier, it will ask you to register the application; uncheck the option and click Next (optional).
    Screenshot
  39. Once the installation is complete, check Launch WinHTTrack Website Copier, uncheck the View history.txt file option, and click Finish so that the application launches automatically.
    Screenshot
  40. About WinHTTrack Website Copier pop-up appears, along with WinHTTrack Website Copier main window (WinHTTrack Website Copier -[New Project 1]).
    Click OK on the pop-up.
    Screenshot
  41. WinHTTrack Website Copier main window appears, click Next.
    Screenshot
  42. Enter luxurytreats in the New Project name: field, type info in the Project category: drop-down list, leave the Base path field at its default, and click Next.
    Screenshot
  43. Leave the Action: field at its default, type http://www.luxurytreats.com in the Web Address: (URL) field, leave the URL list (.txt): field at its default, and then click the Set options button under Preferences and mirror options.
    Screenshot
  44. Clicking Set options launches the WinHTTrack window.
    Click the Scan Rules tab, select the check boxes for the file types as shown in the screenshot, click OK, and then click the Next button in the Mirroring Mode wizard.
    Screenshot
  45. Check Disconnect when finished option, and leave the other options to default. Click Finish.
    Screenshot
  46. HTTrack starts copying content from the target company’s website as shown in the screenshot.
    It will take time to copy all the website content and pages.
    Screenshot
  47. Once the tool completes mirroring the website, click Finish.
    The View error log button may flash after completion; the errors can be ignored.
    Screenshot
  48. The WinHTTrack Website Copier main window appears; exit the application.
    Navigate to the location C:\My Web Sites\luxurytreats and open the index.html file in a web browser to view the mirrored website.
    In this lab, the browser used to view the file is Firefox.
    Screenshot
  49. The mirrored luxurytreats website appears in the default web browser. Browse various webpages in the website in order to examine the website.
    Screenshot
  50. Website mirroring is successfully done using WinHTTrack Website Copier.
    Close the web browser and all the windows that are open.
    Screenshot
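One detail that makes a mirror like the one above browsable offline is mapping every remote URL to a path under a local root, the way WinHTTrack lays files out under C:\My Web Sites\luxurytreats. A hypothetical sketch of that mapping (the path layout is an assumption for illustration, not HTTrack's exact scheme):

```python
# Hypothetical sketch of URL-to-local-path mapping for a website mirror.
# The layout (root/host/path) is an assumption, not HTTrack's exact scheme.
from urllib.parse import urlparse
from pathlib import PurePosixPath

def local_path(url, root="mirror"):
    """Map a remote URL to a file path under the local mirror root."""
    parsed = urlparse(url)
    # The site root becomes index.html, matching the file opened in step 48.
    path = parsed.path if parsed.path not in ("", "/") else "/index.html"
    return str(PurePosixPath(root) / parsed.netloc / path.lstrip("/"))

print(local_path("http://www.luxurytreats.com/"))
# mirror/www.luxurytreats.com/index.html
```

A full mirroring tool combines this with recursive link extraction and rewrites each page's links to point at the local copies, which is why the mirrored site remains navigable in a browser.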
In this lab, you have learned different techniques to gather information about a company.